More Than His Car Is Bent Out of Shape
A 30-year-old man is brought to your emergency department by emergency medical services following a motor vehicle collision.
Upon arrival, he is immediately intubated because emergency personnel had difficulty intubating him in the field. He has a Glasgow Coma Scale score of 3T. The patient’s blood pressure is 90/40 mm Hg and his heart rate, 150 beats/min. He appears to have deformities in his lower extremities.
You obtain portable radiographs of the chest and pelvis. The latter is shown. What is your impression?
ANSWER
The radiograph demonstrates bilateral hip dislocations. On the right, the femoral head appears to be posteriorly dislocated and slightly internally rotated. On the left, the femoral head appears to be anteriorly and superiorly dislocated (although evaluation is limited by a single view). Neither side appears to have any obvious fractures.
The patient’s dislocations were promptly reduced in the trauma bay by the orthopedic service before he was sent for CT.
Preclinical findings highlight value of Lynch syndrome for cancer vaccine development
ATLANTA – Lynch syndrome serves as an excellent platform for the development of immunoprevention cancer vaccines, and findings from a preclinical Lynch syndrome mouse model support ongoing research, according to Steven M. Lipkin, MD, PhD.
A novel vaccine, which included peptides encoding four intestinal cancer frameshift peptide (FSP) neoantigens derived from coding microsatellite (cMS) mutations in the genes Nacad, Maz, Xirp1, and Senp6, elicited strong antigen-specific cellular immune responses in the model, reported Dr. Lipkin, the Gladys and Roland Harriman Professor of Medicine and vice chair for research in the Sanford and Joan Weill Department of Medicine, Weill Cornell Medical College, New York, at the annual meeting of the American Association for Cancer Research.
CD4-positive T-cell responses were detected for Maz, Nacad, and Senp6, and CD8-positive T-cell responses for Xirp1 and Nacad, he noted, explaining that the findings come in the wake of a recently completed phase 1/2a clinical trial that demonstrated the safety and immunogenicity of an FSP neoantigen-based vaccine in patients with microsatellite unstable (MSI) colorectal cancer.
The current effort to further develop a preventive vaccine against MSI cancers in Lynch syndrome used a preclinical mouse model and began with a systematic database search to identify cMS sequences in the murine genome. Intestinal tumors obtained from Lynch syndrome mice were evaluated for mutations affecting these candidate cMS; of 13 candidates with a mutation frequency of 15% or higher, the 4 FSP neoantigens ultimately included in the vaccine were those that elicited strong antigen-specific cellular immune responses.
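For illustration only, the candidate-selection step described above amounts to a frequency filter over the candidate genes; the sketch below is hypothetical (the frequencies are placeholders, not data from the study), and only the 15% threshold and gene names come from the report.

```python
# Hypothetical candidate-selection step: keep cMS candidates whose
# mutation frequency in Lynch syndrome mouse tumors is >= 15%.
# The frequencies below are invented placeholders for illustration.
candidate_freqs = {
    "Nacad": 0.38, "Maz": 0.27, "Xirp1": 0.22, "Senp6": 0.18,
    "OtherGeneA": 0.09, "OtherGeneB": 0.04,
}
vaccine_candidates = {g: f for g, f in candidate_freqs.items() if f >= 0.15}
print(sorted(vaccine_candidates))  # ['Maz', 'Nacad', 'Senp6', 'Xirp1']
```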
Vaccination with peptides encoding these four intestinal cancer FSP neoantigens promoted antineoantigen immunity, reduced intestinal tumorigenicity, and prolonged overall survival, Dr. Lipkin said.
Further, preclinical data suggested that naproxen might provide better risk-reducing effects in this setting than aspirin, which has previously been shown to reduce colorectal cancer risk in Lynch syndrome patients. Its addition to the vaccine did, indeed, improve response, he noted, explaining that naproxen worked as “sort of a super-aspirin” that improved overall survival compared with vaccine alone or nonsteroidal anti-inflammatory agents alone.
In a video interview, Dr. Lipkin describes his research and its potential implications for the immunoprevention of Lynch syndrome and other cancers.
Vaccination with as few as four mutations that occur across Lynch syndrome tumors induced complete cures in some mice and delays in disease onset in others, he said.
“[This is] a very simple approach, very effective,” he added, noting that the T cells are now being studied to better understand the biology of the effects. “The idea of immunoprevention ... is actually very exciting and ... can be expanded beyond this.”
Lynch syndrome is a “great place to start,” because of the high rate of mutations, which are the most immunogenic types of mutations, he said.
“If we can get this basic paradigm to work, I think we can expand it to other types of mutations – for example, KRAS or BRAF, which are seen frequently in lung cancers, colon cancers, stomach cancers, pancreatic cancers, and others,” he said, noting that a proposal for a phase 1 clinical trial has been submitted.
REPORTING FROM AACR 2019
CV disease and mortality risk higher with younger age of type 2 diabetes diagnosis
Individuals who are younger when diagnosed with type 2 diabetes are at greater risk of cardiovascular disease and death, compared with those diagnosed at an older age, according to a retrospective study involving almost 2 million people.
People diagnosed with type 2 diabetes at age 40 or younger were at greatest risk of most outcomes, reported lead author Naveed Sattar, MD, PhD, professor of metabolic medicine, University of Glasgow, Scotland, and his colleagues. “Treatment target recommendations in regards to the risk factor control may need to be more aggressive in people developing diabetes at younger ages,” they wrote in Circulation.
In contrast, developing type 2 diabetes over the age of 80 years had little impact on risks.
“[R]eassessment of treatment goals in elderly might be useful,” the investigators wrote. “Diabetes screening needs for the elderly (above 80) should also be reevaluated.”
The study involved 318,083 patients with type 2 diabetes registered in the Swedish National Diabetes Registry between 1998 and 2012. Each patient was matched with 5 individuals from the general population based on sex, age, and county of residence, providing a control population of 1,575,108. Outcomes assessed included non-cardiovascular mortality, cardiovascular mortality, all-cause mortality, hospitalization for heart failure, coronary heart disease, stroke, atrial fibrillation, and acute myocardial infarction. Patients were followed for cardiovascular outcomes from 1998 to December 2013, while mortality surveillance continued through 2014.
In comparison with controls, patients diagnosed at 40 years or younger had the highest excess risk for most outcomes. Excess risk of heart failure was elevated almost 5-fold (hazard ratio [HR], 4.77), and risk of coronary heart disease wasn’t far behind (HR, 4.33). Risks of acute MI (HR, 3.41), stroke (HR, 3.58), and atrial fibrillation (HR, 1.95) were also elevated. Cardiovascular-related mortality was increased almost 3-fold (HR, 2.72), while total mortality (HR, 2.05) and non-cardiovascular mortality (HR, 1.95) were raised to a lesser degree.
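Hazard ratios like these are typically estimated with survival models such as Cox proportional hazards regression; the study's exact modeling is not detailed in this summary, so the sketch below (Python with the lifelines library, entirely simulated data) only illustrates how such an HR is produced from time-to-event data.

```python
# Synthetic illustration of estimating a hazard ratio with Cox regression.
# All data are simulated; this is not the study's analysis.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2_000
diabetes = rng.integers(0, 2, n)              # 1 = T2DM, 0 = matched control
# Simulate event times in which diabetes roughly doubles the hazard.
hazard = 0.05 * np.exp(np.log(2.0) * diabetes)
time = rng.exponential(1.0 / hazard)
event = (time < 15).astype(int)               # administrative censoring at 15 years

df = pd.DataFrame({
    "diabetes": diabetes,
    "time": np.minimum(time, 15),
    "event": event,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)  # HR for 'diabetes' should land near 2 (simulated)
```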
“Thereafter, incremental risks generally declined with each higher decade age at diagnosis” of type 2 diabetes, the investigators wrote.
After 80 years of age, all mortality hazard ratios dropped below 1, indicating lower risk than controls. Although hazard ratios for non-fatal outcomes were still greater than 1 in this age group, these risks were “substantially attenuated compared with relative incremental risks in those diagnosed with T2DM at younger ages,” the investigators wrote.
The study was funded by the Swedish Association of Local Authorities and Regions, the Swedish Heart and Lung Foundation, and the Swedish Research Council.
The investigators disclosed financial relationships with Amgen, AstraZeneca, Eli Lilly, and other pharmaceutical companies.
SOURCE: Sattar et al. Circulation. 2019 Apr 8. doi:10.1161/CIRCULATIONAHA.118.037885.
FROM CIRCULATION
Key clinical point: Patients who are younger when diagnosed with type 2 diabetes mellitus (T2DM) are at greater risk of cardiovascular disease and death than patients diagnosed at an older age.
Major finding: Patients diagnosed with T2DM at age 40 or younger had twice the risk of death from any cause, compared with age-matched controls (hazard ratio, 2.05).
Study details: A retrospective analysis of type 2 diabetes and associations with cardiovascular and mortality risks, using data from 318,083 patients in the Swedish National Diabetes Registry.
Disclosures: The study was funded by the Swedish Association of Local Authorities and Regions, the Swedish Heart and Lung Foundation, and the Swedish Research Council. The investigators disclosed financial relationships with Amgen, AstraZeneca, Eli Lilly, and others.
Source: Sattar et al. Circulation. 2019 Apr 8. doi:10.1161/CIRCULATIONAHA.118.037885.
Managing Eating Disorders on a General Pediatrics Unit: A Centralized Video Monitoring Pilot
Hospitalizations for nutritional rehabilitation of patients with restrictive eating disorders are increasing.1 Among primary mental health admissions at free-standing children’s hospitals, eating disorders represent 5.5% of hospitalizations and are associated with the longest length of stay (LOS; mean 14.3 days) and costliest care (mean $46,130).2 Admission is necessary to ensure initial weight restoration and monitoring for symptoms of refeeding syndrome, including electrolyte shifts and vital sign abnormalities.3-5
Supervision is generally considered an essential element of caring for hospitalized patients with eating disorders, who may experience difficulty adhering to nutritional treatment, perform excessive movement or exercise, or demonstrate purging or self-harming behaviors. Supervision is presumed to prevent counterproductive behaviors, facilitating weight gain and earlier discharge to psychiatric treatment. Best practices for patient supervision to address these challenges have not been established but often include mealtime or continuous one-to-one supervision by nursing assistants (NAs) or other staff.6,7 While meal supervision has been shown to decrease medical LOS, it is costly, reduces staff availability for the care of other patients, and can be a barrier to caring for patients with eating disorders in many institutions.8
Although not previously used in patients with eating disorders, centralized video monitoring (CVM) may provide an additional mode of supervision. CVM is an emerging technology consisting of real-time video streaming, without video recording, enabling tracking of patient movement, redirection of behaviors, and communication with unit nurses when necessary. CVM has been used in multiple patient safety initiatives to reduce falls, address staffing shortages, reduce costs,9,10 supervise patients at risk for self-harm or elopement, and prevent controlled medication diversion.10,11
We sought to pilot a novel use of CVM to replace our institution’s standard practice of continuous one-to-one nursing assistant (NA) supervision of patients admitted for medical stabilization of an eating disorder. Our objective was to evaluate the supervision cost and feasibility of CVM, using LOS and days to weight gain as balancing measures.
METHODS
Setting and Participants
This retrospective cohort study included patients 12-18 years old admitted to the pediatric hospital medicine service on a general unit of an academic quaternary care children’s hospital for medical stabilization of an eating disorder between September 2013 and March 2017. Patients were identified using administrative data based on a primary or secondary diagnosis of anorexia nervosa, eating disorder not otherwise specified, or another specified eating disorder (ICD-9 307.1, 307.59; ICD-10 F50.00, F50.01, F50.89, F50.9).12,13 This research study was considered exempt by the University of Wisconsin School of Medicine and Public Health’s Institutional Review Board.
Supervision Interventions
A standard medical stabilization protocol was used for patients admitted with an eating disorder throughout the study period (Appendix). All patients received continuous one-to-one NA supervision until they reached the target calorie intake and demonstrated the ability to follow the nutritional meal protocol. Beginning July 2015, patients received continuous CVM supervision unless they expressed suicidal ideation (SI), which triggered one-to-one NA supervision until they no longer endorsed suicidality.
Centralized Video Monitoring Implementation
The institutional CVM technology was the AvaSys TeleSitter Solution (AvaSure, Inc). Our institution purchased CVM devices for use in adult settings, and one was assigned for pediatric CVM. Mobile CVM video carts were deployed to patient rooms and generated live video streams, without recorded capture, which were supervised by CVM technicians. These technicians were NAs hired and trained specifically for this role; worked four-, eight-, and 12-hour shifts; and observed up to eight camera feeds on a single monitor in a centralized room. Patients and family members could refuse CVM, which would trigger one-to-one NA supervision. Patients were not observed by CVM while in the restroom; staff were notified by either the patient or technician, and one-to-one supervision was provided. CVM had two-way audio communication, which allowed technicians to redirect patients verbally. Technicians could contact nursing staff directly by phone when additional intervention was needed.
Supervision Costs
NA supervision costs were estimated at $19/hour, based on the institution’s average NA salary at that time, per human resources data. No additional mealtime supervision cost was included, as one-to-one in-person supervision was already occurring.
CVM supervision costs were defined as the sum of the device cost plus CVM technician costs and two hours of one-to-one NA mealtime supervision per day. The CVM device cost was estimated at $2.10/hour, assuming a 10-year machine life expectancy (single unit cost $82,893 in 2015, 3,944 hours of use in fiscal year 2018). CVM technician costs were $19/hour, based on the institution’s average CVM technician salary at that time. Because technicians monitored an average of six patients simultaneously during this study, one-sixth of a CVM technician’s salary (ie, $3.17/hour) was used for each hour of CVM monitoring. Patients with mixed (NA and CVM) supervision were analyzed with those having CVM supervision. These patients’ costs were the sum of their NA supervision costs plus their CVM supervision costs.
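The per-hour figures above follow from simple arithmetic on the stated inputs. A minimal sketch of the calculation is below; the 24-hour supervision day used for the daily comparison is an illustrative assumption on our part, not a figure from the study.

```python
# Supervision cost arithmetic from the Methods text; the 24-hour day in
# the daily comparison is an illustrative assumption, not a study figure.
DEVICE_COST = 82_893        # single CVM unit cost, 2015 dollars
LIFE_YEARS = 10             # assumed machine life expectancy
HOURS_PER_YEAR = 3_944      # institutional hours of use, fiscal year 2018
NA_RATE = 19.00             # NA / CVM technician hourly wage ($/hour)
PATIENTS_PER_TECH = 6       # average simultaneous patients per technician
MEAL_HOURS_PER_DAY = 2      # one-to-one NA mealtime supervision per day

device_per_hour = DEVICE_COST / (LIFE_YEARS * HOURS_PER_YEAR)   # ~= $2.10
tech_per_hour = NA_RATE / PATIENTS_PER_TECH                     # ~= $3.17

# Illustrative daily totals, assuming CVM runs around the clock.
cvm_per_day = 24 * (device_per_hour + tech_per_hour) + MEAL_HOURS_PER_DAY * NA_RATE
na_per_day = 24 * NA_RATE

print(f"device: ${device_per_hour:.2f}/h, technician share: ${tech_per_hour:.2f}/h")
print(f"CVM day: ${cvm_per_day:.2f} vs one-to-one NA day: ${na_per_day:.2f}")
# device: $2.10/h, technician share: $3.17/h
# CVM day: $164.44 vs one-to-one NA day: $456.00
```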
Data Collection
Descriptive variables including age, gender, race/ethnicity, insurance, and LOS were collected from administrative data. The duration and type of supervision for all patients were collected from daily staffing logs. The eating disorder protocol standardized the process of obtaining daily weights (Appendix). Days to weight gain following admission were defined as the total number of days from admission to the first day of weight gain that was followed by another day of weight gain or maintenance of the same weight.
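As one reading of that definition, the small helper below (hypothetical; not the study's code) returns days to weight gain from a series of daily weights.

```python
from typing import Optional, Sequence

def days_to_weight_gain(weights: Sequence[float]) -> Optional[int]:
    """Days from admission to the first day of weight gain that is
    followed by another day of gain or by weight maintenance.

    weights[0] is the admission weight, one entry per hospital day.
    Returns None if no qualifying day exists. This is one reading of
    the paper's definition, for illustration only.
    """
    for day in range(1, len(weights) - 1):
        gained = weights[day] > weights[day - 1]
        sustained = weights[day + 1] >= weights[day]
        if gained and sustained:
            return day
    return None

# Example: a dip on day 1, then a gain on day 2 that is sustained on day 3.
print(days_to_weight_gain([41.0, 40.8, 41.1, 41.1, 41.4]))  # -> 2
```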
Data Analysis
Patient and hospitalization characteristics were summarized. A sample size of at least 14 in each group was estimated as necessary to detect a 50% reduction in supervision cost between the groups using alpha = 0.05, a power of 80%, a mean cost of $4,400 in the NA group, and a standard deviation of $1,600. Wilcoxon rank-sum tests were used to assess differences in median supervision cost between NA and CVM use. Differences in mean LOS and days to weight gain between NA and CVM use were assessed with t-tests because these data were normally distributed.
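For reference, the mechanics of such a sample size estimate can be reproduced with a standard two-sample power calculation. The sketch below (Python with statsmodels, using the quoted parameters) illustrates the approach only: a plain t-test calculation is shown, and the exact n depends on the chosen test and any nonparametric correction, so it need not reproduce the authors' figure of 14.

```python
# Sketch of a two-sample power calculation under the stated parameters.
# A plain t-test calculation is shown; corrections for the Wilcoxon
# rank-sum test (or other assumptions) raise the required n, so this
# does not necessarily match the authors' estimate of 14 per group.
from statsmodels.stats.power import TTestIndPower

mean_na = 4_400          # stated mean supervision cost, NA group ($)
sd = 1_600               # stated standard deviation ($)
reduction = 0.50         # targeted 50% cost reduction
effect_size = (mean_na * reduction) / sd   # Cohen's d ~= 1.375

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80,
    ratio=1.0, alternative="two-sided",
)
print(f"d = {effect_size:.3f}, n per group ~= {n_per_group:.1f}")
# Under these assumptions the t-test calculation gives roughly 9-10/group.
```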
RESULTS
Patient Characteristics and Supervision Costs
The study included 37 consecutive admissions (NA = 23 and CVM = 14) involving 35 unique patients. Patients were female, primarily non-Hispanic White, and privately insured (Table 1). Median supervision cost was statistically significantly higher with NA supervision, at $4,104/admission, than with CVM, at $1,166/admission (P < .001, Table 2).
Balancing Measures, Acceptability, and Feasibility
Mean LOS was 11.7 days for NA and 9.8 days for CVM (P = .27; Table 2). The mean number of days to weight gain was 3.1 and 3.6 days, respectively (P = .28). No patients converted from CVM to NA supervision. One patient with SI converted to CVM after SI resolved and two patients required ongoing NA supervision due to continued SI. There were no reported refusals, technology failures, or unplanned discontinuations of CVM. One patient/family reported excessive CVM redirection of behavior.
DISCUSSION
This is the first description of CVM use in adolescent patients or patients with eating disorders. Our results suggest that CVM is feasible and less costly in this population than one-to-one NA supervision, without statistically significant differences in LOS or time to weight gain. Patients who received CVM along with any NA supervision (beyond mealtime supervision alone) were analyzed in the CVM group; therefore, this study may underestimate the cost savings from CVM supervision. This innovative use of CVM may represent an opportunity for hospitals to repurpose monitoring technology for more efficient supervision of patients with eating disorders.
This pediatric pilot study adds to the growing body of literature in adult patients suggesting CVM supervision may be a feasible inpatient cost-reduction strategy.9,10 One single-center study demonstrated that the use of CVM with adult inpatients led to fewer unsafe behaviors, eg, patient removal of intravenous catheters and oxygen therapy. Personnel savings exceeded the original investment cost of the monitor within one fiscal quarter.9 Results of another study suggest that CVM use with hospitalized adults who required supervision to prevent falls was associated with improved patient and family satisfaction.14 In the absence of a gold standard for supervision of patients hospitalized with eating disorders, CVM technology is a tool that may balance cost, care quality, and patient experience. Given the upfront investment in CVM units, this technology may be most appropriate for institutions already using CVM for other inpatient indications.
Although our institutional cost of CVM use was similar to that reported by other institutions,11,15 the single-center design of this pilot study limits the generalizability of our findings. Unadjusted results of this observational study may be confounded by indication bias. As this was a pilot study, it was powered to detect a clinically significant difference in cost between NA and CVM supervision. While statistically significant differences were not seen in LOS or weight gain, this pilot study was not powered to detect potential differences or to adjust for all potential confounders (eg, other mental health conditions or comorbidities, eating disorder type, previous hospitalizations). Future studies should include these considerations in estimating sample sizes. The ability to conduct a robust cost-effectiveness analysis was also limited by cost data availability and reliance on staffing assumptions to calculate supervision costs. However, these findings will be important for valid effect size estimates for future interventional studies that rigorously evaluate CVM effectiveness and safety. Patients and families were not formally surveyed about their experiences with CVM, and the patient and family experience is another important outcome to consider in future studies.
CONCLUSION
The results of this pilot study suggest that supervision costs for patients admitted for medical stabilization of eating disorders were statistically significantly lower with CVM when compared with one-to-one NA supervision, without a change in hospitalization LOS or time to weight gain. These findings are particularly important as hospitals seek opportunities to reduce costs while providing safe and effective care. Future efforts should focus on evaluating clinical outcomes and patient experiences with this technology and strategies to maximize efficiency to offset the initial device cost.
Disclosures
The authors have no financial relationships relevant to this article to disclose. The authors have no conflicts of interest relevant to this article to disclose.
1. Zhao Y, Encinosa W. An update on hospitalizations for eating disorders, 1999 to 2009: Statistical Brief #120. In: Healthcare Cost and Utilization Project (HCUP) Statistical Briefs. Rockville, MD: Agency for Healthcare Research and Quality (US); 2006.
2. Bardach NS, Coker TR, Zima BT, et al. Common and costly hospitalizations for pediatric mental health disorders. Pediatrics. 2014;133(4):602-609. doi: 10.1542/peds.2013-3165.
3. Society for Adolescent Health and Medicine, Golden NH, et al. Position paper of the Society for Adolescent Health and Medicine: medical management of restrictive eating disorders in adolescents and young adults. J Adolesc Health. 2015;56(1):121-125. doi: 10.1016/j.jadohealth.2014.10.259.
4. Katzman DK. Medical complications in adolescents with anorexia nervosa: a review of the literature. Int J Eat Disord. 2005;37(S1):S52-S59; discussion S87-S89. doi: 10.1002/eat.20118.
5. Strandjord SE, Sieke EH, Richmond M, Khadilkar A, Rome ES. Medical stabilization of adolescents with nutritional insufficiency: a clinical care path. Eat Weight Disord. 2016;21(3):403-410. doi: 10.1007/s40519-015-0245-5.
6. Kells M, Davidson K, Hitchko L, O’Neil K, Schubert-Bob P, McCabe M. Examining supervised meals in patients with restrictive eating disorders. Appl Nurs Res. 2013;26(2):76-79. doi: 10.1016/j.apnr.2012.06.003.
7. Leclerc A, Turrini T, Sherwood K, Katzman DK. Evaluation of a nutrition rehabilitation protocol in hospitalized adolescents with restrictive eating disorders. J Adolesc Health. 2013;53(5):585-589. doi: 10.1016/j.jadohealth.2013.06.001.
8. Kells M, Schubert-Bob P, Nagle K, et al. Meal supervision during medical hospitalization for eating disorders. Clin Nurs Res. 2017;26(4):525-537. doi: 10.1177/1054773816637598.
9. Jeffers S, Searcey P, Boyle K, et al. Centralized video monitoring for patient safety: a Denver Health Lean journey. Nurs Econ. 2013;31(6):298-306.
10. Sand-Jecklin K, Johnson JR, Tylka S. Protecting patient safety: can video monitoring prevent falls in high-risk patient populations? J Nurs Care Qual. 2016;31(2):131-138. doi: 10.1097/NCQ.0000000000000163.
11. Burtson PL, Vento L. Sitter reduction through mobile video monitoring: a nurse-driven sitter protocol and administrative oversight. J Nurs Adm. 2015;45(7-8):363-369. doi: 10.1097/NNA.0000000000000216.
12. Centers for Disease Control and Prevention. ICD-9-CM Guidelines, 9th ed. https://www.cdc.gov/nchs/data/icd/icd9cm_guidelines_2011.pdf. Accessed April 11, 2018.
13. Centers for Disease Control and Prevention. ICD-9-CM Code Conversion Table. https://www.cdc.gov/nchs/data/icd/icd-9-cm_fy14_cnvtbl_final.pdf. Accessed April 11, 2018.
14. Cournan M, Fusco-Gessick B, Wright L. Improving patient safety through video monitoring. Rehabil Nurs. 2016. doi: 10.1002/rnj.308.
15. Rochefort CM, Ward L, Ritchie JA, Girard N, Tamblyn RM. Patient and nurse staffing characteristics associated with high sitter use costs. J Adv Nurs. 2012;68(8):1758-1767. doi: 10.1111/j.1365-2648.2011.05864.x.
Hospitalizations for nutritional rehabilitation of patients with restrictive eating disorders are increasing.1 Among primary mental health admissions at free-standing children’s hospitals, eating disorders represent 5.5% of hospitalizations and are associated with the longest length of stay (LOS; mean 14.3 days) and costliest care (mean $46,130).2 Admission is necessary to ensure initial weight restoration and monitoring for symptoms of refeeding syndrome, including electrolyte shifts and vital sign abnormalities.3-5
Supervision is generally considered an essential element of caring for hospitalized patients with eating disorders, who may experience difficulty adhering to nutritional treatment, perform excessive movement or exercise, or demonstrate purging or self-harming behaviors. Supervision is presumed to prevent counterproductive behaviors, facilitating weight gain and earlier discharge to psychiatric treatment. Best practices for patient supervision to address these challenges have not been established but often include meal time or continuous one-to-one supervision by nursing assistants (NAs) or other staff.6,7 While meal supervision has been shown to decrease medical LOS, it is costly, reduces staff availability for the care of other patient care, and can be a barrier to caring for patients with eating disorders in many institutions.8
Although not previously used in patients with eating disorders, centralized video monitoring (CVM) may provide an additional mode of supervision. CVM is an emerging technology consisting of real-time video streaming, without video recording, enabling tracking of patient movement, redirection of behaviors, and communication with unit nurses when necessary. CVM has been used in multiple patient safety initiatives to reduce falls, address staffing shortages, reduce costs,9,10 supervise patients at risk for self-harm or elopement, and prevent controlled medication diversion.10,11
We sought to pilot a novel use of CVM to replace our institution’s standard practice of continuous one-to-one nursing assistant (NA) supervision of patients admitted for medical stabilization of an eating disorder. Our objective was to evaluate the supervision cost and feasibility of CVM, using LOS and days to weight gain as balancing measures.
METHODS
Setting and Participants
This retrospective cohort study included patients 12-18 years old admitted to the pediatric hospital medicine service on a general unit of an academic quaternary care children’s hospital for medical stabilization of an eating disorder between September 2013 and March 2017. Patients were identified using administrative data based on primary or secondary diagnosis of anorexia nervosa, eating disorder not other wise specified, or another specified eating disorder (ICD 9 3071, 20759, or ICD 10 f5000, 5001, f5089, f509).12,13 This research study was considered exempt by the University of Wisconsin School of Medicine and Public Health’s Institutional Review Board.
Supervision Interventions
A standard medical stabilization protocol was used for patients admitted with an eating disorder throughout the study period (Appendix). All patients received continuous one-to-one NA supervision until they reached the target calorie intake and demonstrated the ability to follow the nutritional meal protocol. Beginning July 2015, patients received continuous CVM supervision unless they expressed suicidal ideation (SI), which triggered one-to-one NA supervision until they no longer endorsed suicidality.
Centralized Video Monitoring Implementation
Institutional CVM technology was AvaSys TeleSitter Solution (AvaSure, Inc). Our institution purchased CVM devices for use in adult settings, and one was assigned for pediatric CVM. Mobile CVM video carts were deployed to patient rooms and generated live video streams, without recorded capture, which were supervised by CVM technicians. These technicians were NAs hired and trained specifically for this role; worked four-, eight-, and 12-hour shifts; and observed up to eight camera feeds on a single monitor in a centralized room. Patients and family members could refuse CVM, which would trigger one-to-one NA supervision. Patients were not observed by CVM while in the restroom; staff were notified by either the patient or technician, and one-to-one supervision was provided. CVM had two-way audio communication, which allowed technicians to redirect patients verbally. Technicians could contact nursing staff directly by phone when additional intervention was needed.
Supervision Costs
NA supervision costs were estimated at $19/hour, based upon institutional human resources average NA salaries at that time. No additional mealtime supervision was included, as in-person supervision was already occurring.
CVM supervision costs were defined as the sum of the device cost plus CVM technician costs and two hours of one-to-one NA mealtime supervision per day. The CVM device cost was estimated at $2.10/hour, assuming a 10-year machine life expectancy (single unit cost $82,893 in 2015, 3,944 hours of use in fiscal year of 2018). CVM technician costs were $19/hour, based upon institutional human resources average CVM technician salaries at that time. Because technicians monitored an average of six patients simultaneously during this study, one-sixth of a CVM technician’s salary (ie, $3.17/hour) was used for each hour of CVM monitoring. Patients with mixed (NA and CVM) supervision were analyzed with those having CVM supervision. These patients’ costs were the sum of their NA supervision costs plus their CVM supervision costs.
Data Collection
Descriptive variables including age, gender, race/ethnicity, insurance, and LOS were collected from administrative data. The duration and type of supervision for all patients were collected from daily staffing logs. The eating disorder protocol standardized the process of obtaining daily weights (Appendix). Days to weight gain following admission were defined as the total number of days from admission to the first day of weight gain that was followed by another day of weight gain or maintaining the same weight
Data Analysis
Patient and hospitalization characteristics were summarized. A sample size of at least 14 in each group was estimated as necessary to detect a 50% reduction in supervision cost between the groups using alpha = 0.05, a power of 80%, a mean cost of $4,400 in the NA group, and a standard deviation of $1,600.Wilcoxon rank-sum tests were used to assess differences in median supervision cost between NA and CVM use. Differences in mean LOS and days to weight gain between NA and CVM use were assessed with t-tests because these data were normally distributed.
RESULTS
Patient Characteristics and Supervision Costs
The study included 37 consecutive admissions (NA = 23 and CVM = 14) with 35 unique patients. Patients were female, primarily non-Hispanic White, and privately insured (Table 1). Median supervision cost for the NA was statistically significantly more expensive at $4,104/admission versus $1,166/admission for CVM (P < .001, Table 2).
Balancing Measures, Acceptability, and Feasibility
Mean LOS was 11.7 days for NA and 9.8 days for CVM (P = .27; Table 2). The mean number of days to weight gain was 3.1 and 3.6 days, respectively (P = .28). No patients converted from CVM to NA supervision. One patient with SI converted to CVM after SI resolved and two patients required ongoing NA supervision due to continued SI. There were no reported refusals, technology failures, or unplanned discontinuations of CVM. One patient/family reported excessive CVM redirection of behavior.
DISCUSSION
This is the first description of CVM use in adolescent patients or patients with eating disorders. Our results suggest that CVM appears feasible and less costly in this population than one-to-one NA supervision, without statistically significant differences in LOS or time to weight gain. Patients with CVM with any NA supervision (except mealtime alone) were analyzed in the CVM group; therefore, this study may underestimate cost savings from CVM supervision. This innovative use of CVM may represent an opportunity for hospitals to repurpose monitoring technology for more efficient supervision of patients with eating disorders.
This pediatric pilot study adds to the growing body of literature in adult patients suggesting CVM supervision may be a feasible inpatient cost-reduction strategy.9,10 One single-center study demonstrated that the use of CVM with adult inpatients led to fewer unsafe behaviors, eg, patient removal of intravenous catheters and oxygen therapy. Personnel savings exceeded the original investment cost of the monitor within one fiscal quarter.9 Results of another study suggest that CVM use with hospitalized adults who required supervision to prevent falls was associated with improved patient and family satisfaction.14 In the absence of a gold standard for supervision of patients hospitalized with eating disorders, CVM technology is a tool that may balance cost, care quality, and patient experience. Given the upfront investment in CVM units, this technology may be most appropriate for institutions already using CVM for other inpatient indications.
Although our institutional cost of CVM use was similar to that reported by other institutions,11,15 the single-center design of this pilot study limits the generalizability of our findings. Unadjusted results of this observational study may be confounded by indication bias. As this was a pilot study, it was powered to detect a clinically significant difference in cost between NA and CVM supervision. While statistically significant differences were not seen in LOS or weight gain, this pilot study was not powered to detect potential differences or to adjust for all potential confounders (eg, other mental health conditions or comorbidities, eating disorder type, previous hospitalizations). Future studies should include these considerations in estimating sample sizes. The ability to conduct a robust cost-effectiveness analysis was also limited by cost data availability and reliance on staffing assumptions to calculate supervision costs. However, these findings will be important for valid effect size estimates for future interventional studies that rigorously evaluate CVM effectiveness and safety. Patients and families were not formally surveyed about their experiences with CVM, and the patient and family experience is another important outcome to consider in future studies.
CONCLUSION
The results of this pilot study suggest that supervision costs for patients admitted for medical stabilization of eating disorders were statistically significantly lower with CVM when compared with one-to-one NA supervision, without a change in hospitalization LOS or time to weight gain. These findings are particularly important as hospitals seek opportunities to reduce costs while providing safe and effective care. Future efforts should focus on evaluating clinical outcomes and patient experiences with this technology and strategies to maximize efficiency to offset the initial device cost.
Disclosures
The authors have no financial relationships relevant to this article to disclose. The authors have no conflicts of interest relevant to this article to disclose.
Hospitalizations for nutritional rehabilitation of patients with restrictive eating disorders are increasing.1 Among primary mental health admissions at free-standing children’s hospitals, eating disorders represent 5.5% of hospitalizations and are associated with the longest length of stay (LOS; mean 14.3 days) and costliest care (mean $46,130).2 Admission is necessary to ensure initial weight restoration and monitoring for symptoms of refeeding syndrome, including electrolyte shifts and vital sign abnormalities.3-5
Supervision is generally considered an essential element of caring for hospitalized patients with eating disorders, who may experience difficulty adhering to nutritional treatment, perform excessive movement or exercise, or demonstrate purging or self-harming behaviors. Supervision is presumed to prevent counterproductive behaviors, facilitating weight gain and earlier discharge to psychiatric treatment. Best practices for patient supervision to address these challenges have not been established but often include meal time or continuous one-to-one supervision by nursing assistants (NAs) or other staff.6,7 While meal supervision has been shown to decrease medical LOS, it is costly, reduces staff availability for the care of other patient care, and can be a barrier to caring for patients with eating disorders in many institutions.8
Although not previously used in patients with eating disorders, centralized video monitoring (CVM) may provide an additional mode of supervision. CVM is an emerging technology consisting of real-time video streaming, without video recording, enabling tracking of patient movement, redirection of behaviors, and communication with unit nurses when necessary. CVM has been used in multiple patient safety initiatives to reduce falls, address staffing shortages, reduce costs,9,10 supervise patients at risk for self-harm or elopement, and prevent controlled medication diversion.10,11
We sought to pilot a novel use of CVM to replace our institution’s standard practice of continuous one-to-one nursing assistant (NA) supervision of patients admitted for medical stabilization of an eating disorder. Our objective was to evaluate the supervision cost and feasibility of CVM, using LOS and days to weight gain as balancing measures.
METHODS
Setting and Participants
This retrospective cohort study included patients 12-18 years old admitted to the pediatric hospital medicine service on a general unit of an academic quaternary care children’s hospital for medical stabilization of an eating disorder between September 2013 and March 2017. Patients were identified using administrative data based on primary or secondary diagnosis of anorexia nervosa, eating disorder not other wise specified, or another specified eating disorder (ICD 9 3071, 20759, or ICD 10 f5000, 5001, f5089, f509).12,13 This research study was considered exempt by the University of Wisconsin School of Medicine and Public Health’s Institutional Review Board.
Supervision Interventions
A standard medical stabilization protocol was used for patients admitted with an eating disorder throughout the study period (Appendix). All patients received continuous one-to-one NA supervision until they reached the target calorie intake and demonstrated the ability to follow the nutritional meal protocol. Beginning July 2015, patients received continuous CVM supervision unless they expressed suicidal ideation (SI), which triggered one-to-one NA supervision until they no longer endorsed suicidality.
Centralized Video Monitoring Implementation
Institutional CVM technology was AvaSys TeleSitter Solution (AvaSure, Inc). Our institution purchased CVM devices for use in adult settings, and one was assigned for pediatric CVM. Mobile CVM video carts were deployed to patient rooms and generated live video streams, without recorded capture, which were supervised by CVM technicians. These technicians were NAs hired and trained specifically for this role; worked four-, eight-, and 12-hour shifts; and observed up to eight camera feeds on a single monitor in a centralized room. Patients and family members could refuse CVM, which would trigger one-to-one NA supervision. Patients were not observed by CVM while in the restroom; staff were notified by either the patient or technician, and one-to-one supervision was provided. CVM had two-way audio communication, which allowed technicians to redirect patients verbally. Technicians could contact nursing staff directly by phone when additional intervention was needed.
Supervision Costs
NA supervision costs were estimated at $19/hour, based upon institutional human resources average NA salaries at that time. No additional mealtime supervision was included, as in-person supervision was already occurring.
CVM supervision costs were defined as the sum of the device cost plus CVM technician costs and two hours of one-to-one NA mealtime supervision per day. The CVM device cost was estimated at $2.10/hour, assuming a 10-year machine life expectancy (single unit cost $82,893 in 2015, 3,944 hours of use in fiscal year of 2018). CVM technician costs were $19/hour, based upon institutional human resources average CVM technician salaries at that time. Because technicians monitored an average of six patients simultaneously during this study, one-sixth of a CVM technician’s salary (ie, $3.17/hour) was used for each hour of CVM monitoring. Patients with mixed (NA and CVM) supervision were analyzed with those having CVM supervision. These patients’ costs were the sum of their NA supervision costs plus their CVM supervision costs.
Data Collection
Descriptive variables including age, gender, race/ethnicity, insurance, and LOS were collected from administrative data. The duration and type of supervision for all patients were collected from daily staffing logs. The eating disorder protocol standardized the process of obtaining daily weights (Appendix). Days to weight gain following admission were defined as the total number of days from admission to the first day of weight gain that was followed by another day of weight gain or maintaining the same weight
Data Analysis
Patient and hospitalization characteristics were summarized. A sample size of at least 14 in each group was estimated as necessary to detect a 50% reduction in supervision cost between the groups using alpha = 0.05, a power of 80%, a mean cost of $4,400 in the NA group, and a standard deviation of $1,600.Wilcoxon rank-sum tests were used to assess differences in median supervision cost between NA and CVM use. Differences in mean LOS and days to weight gain between NA and CVM use were assessed with t-tests because these data were normally distributed.
RESULTS
Patient Characteristics and Supervision Costs
The study included 37 consecutive admissions (NA = 23 and CVM = 14) with 35 unique patients. Patients were female, primarily non-Hispanic White, and privately insured (Table 1). Median supervision cost for the NA was statistically significantly more expensive at $4,104/admission versus $1,166/admission for CVM (P < .001, Table 2).
Balancing Measures, Acceptability, and Feasibility
Mean LOS was 11.7 days for NA and 9.8 days for CVM (P = .27; Table 2). The mean number of days to weight gain was 3.1 and 3.6 days, respectively (P = .28). No patients converted from CVM to NA supervision. One patient with SI converted to CVM after SI resolved and two patients required ongoing NA supervision due to continued SI. There were no reported refusals, technology failures, or unplanned discontinuations of CVM. One patient/family reported excessive CVM redirection of behavior.
DISCUSSION
This is the first description of CVM use in adolescent patients or patients with eating disorders. Our results suggest that CVM appears feasible and less costly in this population than one-to-one NA supervision, without statistically significant differences in LOS or time to weight gain. Patients with CVM with any NA supervision (except mealtime alone) were analyzed in the CVM group; therefore, this study may underestimate cost savings from CVM supervision. This innovative use of CVM may represent an opportunity for hospitals to repurpose monitoring technology for more efficient supervision of patients with eating disorders.
This pediatric pilot study adds to the growing body of literature in adult patients suggesting CVM supervision may be a feasible inpatient cost-reduction strategy.9,10 One single-center study demonstrated that the use of CVM with adult inpatients led to fewer unsafe behaviors, eg, patient removal of intravenous catheters and oxygen therapy. Personnel savings exceeded the original investment cost of the monitor within one fiscal quarter.9 Results of another study suggest that CVM use with hospitalized adults who required supervision to prevent falls was associated with improved patient and family satisfaction.14 In the absence of a gold standard for supervision of patients hospitalized with eating disorders, CVM technology is a tool that may balance cost, care quality, and patient experience. Given the upfront investment in CVM units, this technology may be most appropriate for institutions already using CVM for other inpatient indications.
Although our institutional cost of CVM use was similar to that reported by other institutions,11,15 the single-center design of this pilot study limits the generalizability of our findings. Unadjusted results of this observational study may be confounded by indication bias. As this was a pilot study, it was powered to detect a clinically significant difference in cost between NA and CVM supervision. While statistically significant differences were not seen in LOS or weight gain, this pilot study was not powered to detect potential differences or to adjust for all potential confounders (eg, other mental health conditions or comorbidities, eating disorder type, previous hospitalizations). Future studies should include these considerations in estimating sample sizes. The ability to conduct a robust cost-effectiveness analysis was also limited by cost data availability and reliance on staffing assumptions to calculate supervision costs. However, these findings will be important for valid effect size estimates for future interventional studies that rigorously evaluate CVM effectiveness and safety. Patients and families were not formally surveyed about their experiences with CVM, and the patient and family experience is another important outcome to consider in future studies.
CONCLUSION
The results of this pilot study suggest that supervision costs for patients admitted for medical stabilization of eating disorders were statistically significantly lower with CVM when compared with one-to-one NA supervision, without a change in hospitalization LOS or time to weight gain. These findings are particularly important as hospitals seek opportunities to reduce costs while providing safe and effective care. Future efforts should focus on evaluating clinical outcomes and patient experiences with this technology and strategies to maximize efficiency to offset the initial device cost.
Disclosures
The authors have no financial relationships relevant to this article to disclose. The authors have no conflicts of interest relevant to this article to disclose.
1. Zhao Y, Encinosa W. An update on hospitalizations for eating disorders, 1999 to 2009: statistical brief #120. In: Healthcare Cost and Utilization Project (HCUP) Statistical Briefs. Rockville, MD: Agency for Healthcare Research and Quality (US); 2006.
2. Bardach NS, Coker TR, Zima BT, et al. Common and costly hospitalizations for pediatric mental health disorders. Pediatrics. 2014;133(4):602-609. doi: 10.1542/peds.2013-3165.
3. Society for Adolescent Health and Medicine, Golden NH, et al. Position paper of the Society for Adolescent Health and Medicine: medical management of restrictive eating disorders in adolescents and young adults. J Adolesc Health. 2015;56(1):121-125. doi: 10.1016/j.jadohealth.2014.10.259.
4. Katzman DK. Medical complications in adolescents with anorexia nervosa: a review of the literature. Int J Eat Disord. 2005;37(S1):S52-S59; discussion S87-S89. doi: 10.1002/eat.20118.
5. Strandjord SE, Sieke EH, Richmond M, Khadilkar A, Rome ES. Medical stabilization of adolescents with nutritional insufficiency: a clinical care path. Eat Weight Disord. 2016;21(3):403-410. doi: 10.1007/s40519-015-0245-5.
6. Kells M, Davidson K, Hitchko L, O’Neil K, Schubert-Bob P, McCabe M. Examining supervised meals in patients with restrictive eating disorders. Appl Nurs Res. 2013;26(2):76-79. doi: 10.1016/j.apnr.2012.06.003.
7. Leclerc A, Turrini T, Sherwood K, Katzman DK. Evaluation of a nutrition rehabilitation protocol in hospitalized adolescents with restrictive eating disorders. J Adolesc Health. 2013;53(5):585-589. doi: 10.1016/j.jadohealth.2013.06.001.
8. Kells M, Schubert-Bob P, Nagle K, et al. Meal supervision during medical hospitalization for eating disorders. Clin Nurs Res. 2017;26(4):525-537. doi: 10.1177/1054773816637598.
9. Jeffers S, Searcey P, Boyle K, et al. Centralized video monitoring for patient safety: a Denver Health Lean journey. Nurs Econ. 2013;31(6):298-306.
10. Sand-Jecklin K, Johnson JR, Tylka S. Protecting patient safety: can video monitoring prevent falls in high-risk patient populations? J Nurs Care Qual. 2016;31(2):131-138. doi: 10.1097/NCQ.0000000000000163.
11. Burtson PL, Vento L. Sitter reduction through mobile video monitoring: a nurse-driven sitter protocol and administrative oversight. J Nurs Adm. 2015;45(7-8):363-369. doi: 10.1097/NNA.0000000000000216.
12. Centers for Disease Control and Prevention. ICD-9-CM Guidelines, 9th ed. https://www.cdc.gov/nchs/data/icd/icd9cm_guidelines_2011.pdf. Accessed April 11, 2018.
13. Centers for Disease Control and Prevention. ICD-9-CM Code Conversion Table. https://www.cdc.gov/nchs/data/icd/icd-9-cm_fy14_cnvtbl_final.pdf. Accessed April 11, 2018.
14. Cournan M, Fusco-Gessick B, Wright L. Improving patient safety through video monitoring. Rehabil Nurs. 2016. doi: 10.1002/rnj.308.
15. Rochefort CM, Ward L, Ritchie JA, Girard N, Tamblyn RM. Patient and nurse staffing characteristics associated with high sitter use costs. J Adv Nurs. 2012;68(8):1758-1767. doi: 10.1111/j.1365-2648.2011.05864.x.
© 2019 Society of Hospital Medicine
Interhospital Transfer: Transfer Processes and Patient Outcomes
The transfer of patients between acute care hospitals (interhospital transfer [IHT]) occurs regularly among patients with a variety of diagnoses, in theory, to gain access to unique specialty services and/or a higher level of care, among other reasons.1,2
However, the practice of IHT is variable and nonstandardized,3,4 and existing data largely suggest that transferred patients experience worse outcomes, including longer length of stay, higher hospitalization costs, longer ICU time, and greater mortality, even with rigorous adjustment for confounding by indication.5,6 Though there are many possible reasons for these findings, existing literature suggests that aspects of the transfer process itself may contribute to these outcomes.2,6,7
Understanding which aspects of the transfer process contribute to poor patient outcomes is a key first step toward the development of targeted quality improvement initiatives to improve this process of care. In this study, we aim to examine the association between select characteristics of the transfer process, including the timing of transfer and workload of the admitting physician team, and clinical outcomes among patients undergoing IHT.
METHODS
Data and Study Population
We performed a retrospective analysis of patients aged ≥18 years who were transferred to Brigham and Women’s Hospital (BWH), a 777-bed tertiary care hospital, from another acute care hospital between January 2005 and September 2013. Dates of inclusion were purposefully chosen prior to BWH implementation of a new electronic health record system to avoid potential information bias. As at most academic medical centers, night coverage at BWH differs by service and includes a combination of long-call admitting teams and night float coverage. On weekends, many services are less well staffed, and some procedures are available only if needed emergently. Some services have caps on the daily number of admissions or total patient census, but none have caps on the number of discharges per day. Patients were excluded from analysis if they left BWH against medical advice, were transferred from closely affiliated hospitals with shared personnel and electronic health records (Brigham and Women’s Faulkner Hospital, Dana-Farber Cancer Institute), were transferred from inpatient psychiatric or inpatient hospice facilities, or were transferred to obstetrics or nursery services. Data were obtained from administrative sources and the Research Patient Data Registry (RPDR), a centralized clinical data repository that gathers data from various hospital legacy systems and stores them in one data warehouse.8 Our study was approved by the Partners Institutional Review Board (IRB) with a waiver of patient consent.
Transfer Process Characteristics
Predictors included select characteristics of the transfer process, including (1) day of week of transfer, dichotomized into Friday through Sunday (“weekend”) versus Monday through Thursday (“weekday”);9 Friday was included with “weekend” given the suggestion of increased volume of transfers in advance of the weekend; (2) time of arrival of the transferred patient, categorized into “daytime” (beginning at 7 am), “evening,” and “nighttime”; (3) “busyness” of the admitting team on the day of patient arrival, measured by that team’s combined number of admissions and discharges;10 and (4) time delay between transfer acceptance and patient arrival, categorized into intervals including 0-12 hours, 12-24 hours, and greater than 48 hours.
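For concreteness, these timing predictors can be derived from acceptance and arrival timestamps roughly as follows. This is a minimal pandas sketch with hypothetical column names; the day/evening/night hour cutoffs shown are illustrative assumptions, not the study's definitions.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "accepted": pd.to_datetime(["2012-03-01 10:00", "2012-03-04 20:00",
                                "2012-03-08 08:00"]),
    "arrival":  pd.to_datetime(["2012-03-02 23:40", "2012-03-05 09:15",
                                "2012-03-11 02:05"]),
})

# (1) Weekend (Friday-Sunday) vs weekday (Monday-Thursday); Monday = 0.
df["weekend"] = df["arrival"].dt.dayofweek >= 4

# (2) Time of arrival; cutoffs here (7a-5p day, 5p-10p evening) are assumed.
hr = df["arrival"].dt.hour
df["time_of_day"] = np.select(
    [(hr >= 7) & (hr < 17), (hr >= 17) & (hr < 22)],
    ["daytime", "evening"], default="nighttime")

# (3) Busyness would join per-team counts of admissions + discharges on
# the calendar day of arrival (not derivable from these timestamps alone).

# (4) Delay from acceptance to arrival, bucketed as in the Results.
delay_h = (df["arrival"] - df["accepted"]).dt.total_seconds() / 3600
df["delay_cat"] = pd.cut(delay_h, bins=[0, 12, 24, 48, np.inf],
                         labels=["0-12h", "12-24h", "24-48h", ">48h"])
print(df[["weekend", "time_of_day", "delay_cat"]])
```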
Outcomes
Outcomes included transfer to the intensive care unit (ICU) within 48 hours of arrival and 30-day mortality from date of index admission.5,6
Patient Characteristics
Covariates for adjustment included patient age, sex, race, Elixhauser comorbidity score,11 Diagnosis-Related Group (DRG)-weight, insurance status, year of admission, number of preadmission medications, and service of admission.
Statistical Analyses
We used descriptive statistics to display baseline characteristics and performed a series of univariable and multivariable logistic regression models to obtain the adjusted odds of each transfer process characteristic on each outcome, adjusting for all covariates (PROC LOGISTIC; SAS Institute Inc, Cary, North Carolina). For analyses of ICU transfer within 48 hours of arrival, all patients initially admitted to the ICU at the time of transfer were excluded.
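As a rough Python analogue of these SAS models (simulated data; the variable names and effect sizes are ours, not the study's), the adjusted odds ratios are exponentiated logistic-regression coefficients:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the analytic dataset (hypothetical variables).
rng = np.random.default_rng(42)
n = 5000
df = pd.DataFrame({
    "night": rng.integers(0, 2, n),
    "age": rng.normal(60, 15, n),
    "elixhauser": rng.poisson(3, n),
    "drg_weight": rng.gamma(2.0, 1.0, n),
    "icu_at_arrival": rng.random(n) < 0.06,
})
logit_p = -2.0 + 0.5 * df["night"] + 0.02 * (df["age"] - 60)
df["icu_48h"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Mirror the stated exclusion: drop patients already in the ICU at arrival
# before modeling ICU transfer within 48 hours.
non_icu = df[~df["icu_at_arrival"]]

fit = smf.logit("icu_48h ~ night + age + elixhauser + drg_weight",
                data=non_icu).fit(disp=0)
print(np.exp(fit.params["night"]))          # adjusted OR for nighttime
print(np.exp(fit.conf_int().loc["night"]))  # 95% CI
```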
In the secondary analyses, we used a combined day-of-week and time-of-day variable (ie, Monday day, Monday evening, Monday night, Tuesday day, and so on, with Monday day as the reference group) to obtain a more detailed evaluation of timing of transfer on patient outcomes. We also performed stratified analyses to evaluate each transfer process characteristic on adjusted odds of 30-day mortality stratified by service of admission (ie, at the time of transfer to BWH), adjusting for all covariates. For all analyses, two-sided P values < .05 were considered significant.
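The combined day/time predictor with Monday daytime as the reference group can be encoded, for instance, as a treatment-coded categorical; a brief sketch (level names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"day": ["Monday", "Sunday", "Friday"],
                   "tod": ["day", "night", "night"]})
df["day_time"] = pd.Categorical(df["day"] + " " + df["tod"])

# Put "Monday day" first so treatment (dummy) coding uses it as reference.
levels = ["Monday day"] + sorted(set(df["day_time"]) - {"Monday day"})
df["day_time"] = df["day_time"].cat.reorder_categories(levels)
print(df["day_time"].cat.categories)

# In a statsmodels formula the reference can also be set explicitly, eg:
# smf.logit("died_30d ~ C(day_time, Treatment(reference='Monday day')) + ...",
#           data=df)
```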
RESULTS
Overall, 24,352 patients met our inclusion criteria and underwent IHT, of whom 2,174 (8.9%) died within 30 days. Of the 22,910 transferred patients originally admitted to a non-ICU service, 5,464 (23.8%) underwent ICU transfer within 48 hours of arrival. Cohort characteristics are shown in Table 1.
Multivariable regression analyses demonstrated no significant association between weekend (versus weekday) transfer or increased time delay between patient acceptance and arrival (>48 hours) and adjusted odds of ICU transfer within 48 hours or 30-day mortality. However, they did demonstrate that nighttime (versus daytime) transfer was associated with greater adjusted odds of both ICU transfer and 30-day mortality. Increased admitting team busyness was associated with lower adjusted odds of ICU transfer but was not significantly associated with adjusted odds of 30-day mortality (Table 2). As expected, decreased time delay between patient acceptance and arrival (0-12 hours) was associated with increased adjusted odds of both ICU transfer (adjusted OR 2.68; 95% CI 2.29, 3.15) and 30-day mortality (adjusted OR 1.25; 95% CI 1.03, 1.53) compared with 12-24 hours (results not shown). Time delay >48 hours was not associated with either outcome.
Regression analyses with the combined day/time variable demonstrated that compared with Monday daytime transfer, Sunday night transfer was significantly associated with increased adjusted odds of 30-day mortality, and Friday night transfer was associated with a trend toward increased 30-day mortality (adjusted OR [aOR] 1.88; 95% CI 1.25, 2.82, and aOR 1.43; 95% CI 0.99, 2.06, respectively). We also found that all nighttime transfers (ie, Monday through Sunday night) were associated with increased adjusted odds of ICU transfer within 48 hours (as compared with Monday daytime transfer). Other days/time analyses were not significant.
Univariable and multivariable analyses stratified by service were performed (Appendix). Multivariable stratified analyses demonstrated that weekend transfer, nighttime transfer, and increased admitting team busyness were associated with increased adjusted odds of 30-day mortality among cardiothoracic (CT) and gastrointestinal (GI) surgical service patients. Increased admitting team busyness was also associated with increased mortality among ICU service patients but was associated with decreased mortality among cardiology service patients. An increased time delay between patient acceptance and arrival was associated with decreased mortality among CT and GI surgical service patients (Figure; Appendix). Other adjusted stratified outcomes were not significant.
DISCUSSION
In this study of 24,352 patients undergoing IHT, we found no significant association between weekend transfer or increased time delay between transfer acceptance and arrival and patient outcomes in the cohort as a whole; but we found that nighttime transfer is associated with increased adjusted odds of both ICU transfer within 48 hours and 30-day mortality. Our analyses combining day-of-week and time-of-day demonstrate that Sunday night transfer is particularly associated with increased adjusted odds of 30-day mortality (as compared with Monday daytime transfer), and show a trend toward increased mortality with Friday night transfers. These detailed analyses otherwise reinforce that nighttime transfer across all nights of the week is associated with increased adjusted odds of ICU transfer within 48 hours. We also found that increased admitting team busyness on the day of patient transfer is associated with decreased odds of ICU transfer, though this may solely be reflective of higher turnover services (ie, cardiology) caring for lower acuity patients, as suggested by secondary analyses stratified by service. In addition, secondary analyses demonstrated differential associations between weekend transfers, nighttime transfers, and increased team busyness on the odds of 30-day mortality based on service of transfer. These analyses showed that patients transferred to higher acuity services requiring procedural care, including CT surgery, GI surgery, and Medical ICU, do worse under all three circumstances as compared with patients transferred to other services. Secondary analyses also demonstrated that increased time delay between patient acceptance and arrival is inversely associated with 30-day mortality among CT and GI surgery service patients, likely reflecting lower acuity patients (ie, less sick patients are less rapidly transferred).
There are several possible explanations for these findings. Patients transferred to surgical services at night may reflect a more urgent need for surgery and include a sicker cohort of patients, possibly explaining these findings. Alternatively, or in addition, both weekend and nighttime hospital admission expose patients to similar potential risks, ie, limited resources available during off-peak hours. Our findings could, therefore, reflect the possibility that patients transferred to higher acuity services in need of procedural care are most vulnerable to off-peak timing of transfer. Similar data looking at patients admitted through the emergency room (ER) find the strongest effect of off-peak admissions on patients in need of procedures, including GI hemorrhage,12 atrial fibrillation,13 and acute myocardial infarction (AMI),14 arguably because of the limited availability of necessary interventions. Patients undergoing IHT are a sicker cohort than those admitted through the ER and, therefore, may be even more vulnerable to these issues.3,5 This is supported by our findings that Sunday night transfers (and the trend toward Friday night transfers) are associated with greater mortality compared with Monday daytime transfers, when at-the-ready resources and/or specialty personnel may be less available (Sunday night) and delays until receipt of necessary procedures may be longer (Friday night). Though we did not observe similar results among cardiology service transfers, as might be expected based on existing literature,13,14 this subset of patients includes more heterogeneous diagnoses (ie, not solely those that require acute intervention) and exhibited a low level of acuity (low Elixhauser score and DRG-weight, data not shown).
We also found that increased admitting team busyness on the day of patient transfer is associated with increased odds of 30-day mortality among CT surgery, GI surgery, and ICU service transfers. As above, there are several possible explanations for this finding. It is possible that among these services, only the sickest/neediest patients are accepted for transfer when teams are busiest, explaining our findings. Though this explanation is possible, the measure of team “busyness” includes patient discharges, thereby increasing, not decreasing, availability for incoming patients, making this explanation less likely. Alternatively, it is possible that this finding reflects reverse causation, ie, that teams have less ability to discharge/admit new patients when caring for particularly sick/unstable patient transfers, though this assumes that transferred patients arrive earlier in the day (eg, in time to influence discharge decisions), which infrequently occurs (Table 1). Lastly, it is possible that this subset of patients is more vulnerable to the workload of the team caring for them at the time of their arrival. With high patient turnover (admissions/discharges), the time allocated to each patient’s care may be diminished (ie, “work compression,” trying to do the same amount of work in less time), resulting in decreased time to care for the transferred patient. This has been shown to influence patient outcomes at the time of patient discharge.10
In trying to understand why we observed an inverse relationship between admitting team busyness and odds of ICU transfer within 48 hours, we believe this finding is largely driven by cardiology service transfers, which comprise the highest volume of transferred patients in our cohort (Table 1), and are low acuity patients. Within this population of patients, admitting team busyness is likely a surrogate variable for high turnover/low acuity. This idea is supported by our findings that admitting team busyness is associated with decreased adjusted odds of 30-day mortality in this group (and only in this group).
Similarly, our observed inverse relationship between increased time delay and 30-day mortality among CT and GI surgical service patients is also likely reflective of lower acuity patients. We anticipated that decreased time delay (0-12 hours) would be reflective of greater patient acuity (supported by our findings that decreased time delay is associated with increased odds of ICU transfer and 30-day mortality). However, our findings also suggest that increased time delay (>48 hours) is similarly representative of lower patient acuity and therefore an imperfect measure of discontinuity and/or harmful delays in care during IHT (see limitations below).
Our study is subject to several limitations. This is a single-site study; given known variation in transfer practices between hospitals,3 it is possible that our findings are not generalizable. However, given similar existing data on patients admitted through the ER, our findings are likely reflective of IHT to similar tertiary referral hospitals. Second, although we adjusted for patient characteristics, there remains the possibility of unmeasured confounding and other biases that may account for our results, as discussed. Third, although the definition of “busyness” used in this study was chosen based on prior data demonstrating an effect on patient outcomes,10 we did not include other measures of busyness that may influence outcomes of transferred patients, such as overall team census or hospital busyness. However, the workload associated with a high volume of patient admissions and discharges is arguably a greater reflection of “work compression” for the admitting team compared with overall team census, which may reflect a more static workload with less impact on the care of a newly transferred patient. Also, although hospital census may influence the ability to transfer (ie, lower volume of transferred patients during times of high hospital census), this likely has less of an impact on the direct care of transferred patients than the admitting team’s workload. It is more likely that it would serve as a confounder (eg, sicker patients are accepted for transfer despite high hospital census, while lower risk patients are not).
Nevertheless, future studies should further evaluate the association between other measures of busyness/workload and outcomes of transferred patients. Lastly, though we anticipated that time delay between transfer acceptance and arrival would be correlated with patient acuity, we hypothesized that longer delay might affect patient continuity and communication and thereby impact patient outcomes. However, our results demonstrate that our measurement of this variable was unsuccessful in disentangling patient acuity from our intended evaluation of these vulnerable aspects of IHT. A more detailed evaluation is likely required to explore more fully the potential challenges that may occur with greater time delays (eg, suboptimal communication regarding changes in clinical status during this time period, delays in treatment). Similarly, though our study evaluates the association of nighttime and weekend transfer (and the interaction between these) with patient outcomes, we did not evaluate other intermediate outcomes that may be more affected by the timing of transfer, such as diagnostic errors or delays in procedural care, which warrant further investigation. We do not directly examine the underlying reasons that explain our observed associations, and thus more research is needed to identify these as well as to design and evaluate solutions.
Collectively, our findings suggest that high acuity patients in need of procedural care experience worse outcomes during off-peak times of transfer, and during times of high care-team workload. Though further research is needed to identify underlying reasons to explain our findings, both the timing of patient transfer (when modifiable) and workload of the team caring for the patient on arrival may serve as potential targets for interventions to improve the quality and safety of IHT for patients at greatest risk.
Disclosures
Dr. Mueller and Ms. Fiskio have nothing to disclose. Dr. Schnipper is the recipient of grant funding from Mallinckrodt Pharmaceuticals to conduct an investigator-initiated study of predictors and impact of opioid-related adverse drug events.
1. Iwashyna TJ. The incomplete infrastructure for interhospital patient transfer. Crit Care Med. 2012;40(8):2470-2478. https://doi.org/10.1097/CCM.0b013e318254516f.
2. Mueller SK, Shannon E, Dalal A, Schnipper JL, Dykes P. Patient and physician experience with interhospital transfer: a qualitative study. J Patient Saf. 2018. https://doi.org/10.1097/PTS.0000000000000501.
3. Mueller SK, Zheng J, Orav EJ, Schnipper JL. Rates, predictors and variability of interhospital transfers: a national evaluation. J Hosp Med. 2017;12(6):435-442. https://doi.org/10.12788/jhm.2747.
4. Bosk EA, Veinot T, Iwashyna TJ. Which patients and where: a qualitative study of patient transfers from community hospitals. Med Care. 2011;49(6):592-598. https://doi.org/10.1097/MLR.0b013e31820fb71b.
5. Sokol-Hessner L, White AA, Davis KF, Herzig SJ, Hohmann SF. Interhospital transfer patients discharged by academic hospitalists and general internists: characteristics and outcomes. J Hosp Med. 2016;11(4):245-50. https://doi.org/10.1002/jhm.2515.
6. Mueller S, Zheng J, Orav EJP, Schnipper JL. Inter-hospital transfer and patient outcomes: a retrospective cohort study. BMJ Qual Saf. 2018. https://doi.org/10.1136/bmjqs-2018-008087.
7. Mueller SK, Schnipper JL. Physician perspectives on interhospital transfers. J Patient Saf. 2016. https://doi.org/10.1097/PTS.0000000000000312.
8. Research Patient Data Registry (RPDR). http://rc.partners.org/rpdr. Accessed April 20, 2018.
9. Bell CM, Redelmeier DA. Mortality among patients admitted to hospitals on weekends as compared with weekdays. N Engl J Med. 2001;345(9):663-668. https://doi.org/10.1056/NEJMsa003376.
10. Mueller SK, Donze J, Schnipper JL. Intern workload and discontinuity of care on 30-day readmission. Am J Med. 2013;126(1):81-88. https://doi.org/10.1016/j.amjmed.2012.09.003.
11. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27.
12. Ananthakrishnan AN, McGinley EL, Saeian K. Outcomes of weekend admissions for upper gastrointestinal hemorrhage: a nationwide analysis. Clin Gastroenterol Hepatol. 2009;7(3):296-302e1. https://doi.org/10.1016/j.cgh.2008.08.013.
13. Deshmukh A, Pant S, Kumar G, Bursac Z, Paydak H, Mehta JL. Comparison of outcomes of weekend versus weekday admissions for atrial fibrillation. Am J Cardiol. 2012;110(2):208-211. https://doi.org/10.1016/j.amjcard.2012.03.011.
14. Clarke MS, Wills RA, Bowman RV, et al. Exploratory study of the ‘weekend effect’ for acute medical admissions to public hospitals in Queensland, Australia. Intern Med J. 2010;40(11):777-783. https://doi.org/10.1111/j.1445-5994.2009.02067.x.
The transfer of patients between acute care hospitals (interhospital transfer [IHT]) occurs regularly among patients with a variety of diagnoses, in theory, to gain access to unique specialty services and/or a higher level of care, among other reasons.1,2
However, the practice of IHT is variable and nonstandardized,3,4 and existing data largely suggests that transferred patients experience worse outcomes, including longer length of stay, higher hospitalization costs, longer ICU time, and greater mortality, even with rigorous adjustment for confounding by indication.5,6 Though there are many possible reasons for these findings, existing literature suggests that there may be aspects of the transfer process itself which contribute to these outcomes.2,6,7
Understanding which aspects of the transfer process contribute to poor patient outcomes is a key first step toward the development of targeted quality improvement initiatives to improve this process of care. In this study, we aim to examine the association between select characteristics of the transfer process, including the timing of transfer and workload of the admitting physician team, and clinical outcomes among patients undergoing IHT.
METHODS
Data and Study Population
We performed a retrospective analysis of patients ≥age 18 years who transferred to Brigham and Women’s Hospital (BWH), a 777-bed tertiary care hospital, from another acute care hospital between January 2005, and September 2013. Dates of inclusion were purposefully chosen prior to BWH implementation of a new electronic health records system to avoid potential information bias. As at most academic medical centers, night coverage at BWH differs by service and includes a combination of long-call admitting teams and night float coverage. On weekends, many services are less well staffed, and some procedures may only be available if needed emergently. Some services have caps on the daily number of admissions or total patient census, but none have caps on the number of discharges per day. Patients were excluded from analysis if they left BWH against medical advice, were transferred from closely affiliated hospitals with shared personnel and electronic health records (Brigham and Women’s Faulkner Hospital, Dana Farber Cancer Institute), transferred from inpatient psychiatric or inpatient hospice facilities, or transferred to obstetrics or nursery services. Data were obtained from administrative sources and the research patient data repository (RPDR), a centralized clinical data repository that gathers data from various hospital legacy systems and stores them in one data warehouse.8 Our study was approved by the Partners Institutional Review Board (IRB) with a waiver of patient consent.
Transfer Process Characteristics
Predictors included select characteristics of the transfer process, including (1) Day of week of transfer, dichotomized into Friday through Sunday (“weekend”), versus Monday through Thursday (“weekday”);9 Friday was included with “weekend” given the suggestion of increased volume of transfers in advance of the weekend; (2) Time of arrival of the transferred patient, categorized into “daytime” (7
Outcomes
Outcomes included transfer to the intensive care unit (ICU) within 48 hours of arrival and 30-day mortality from date of index admission.5,6
Patient Characteristics
Covariates for adjustment included: patient age, sex, race, Elixhauser comorbidity score,11 Diagnosis-Related Group (DRG)-weight, insurance status, year of admission, number of preadmission medications, and service of admission.
Statistical Analyses
We used descriptive statistics to display baseline characteristics and performed a series of univariable and multivariable logistic regression models to obtain the adjusted odds of each transfer process characteristic on each outcome, adjusting for all covariates (proc logistic, SAS Statistical Software, Cary, North Carolina). For analyses of ICU transfer within 48 hours of arrival, all patients initially admitted to the ICU at time of transfer were excluded.
In the secondary analyses, we used a combined day-of-week and time-of-day variable (ie, Monday day, Monday evening, Monday night, Tuesday day, and so on, with Monday day as the reference group) to obtain a more detailed evaluation of timing of transfer on patient outcomes. We also performed stratified analyses to evaluate each transfer process characteristic on adjusted odds of 30-day mortality stratified by service of admission (ie, at the time of transfer to BWH), adjusting for all covariates. For all analyses, two-sided P values < .05 were considered significant.
RESULTS
Overall, 24,352 patients met our inclusion criteria and underwent IHT, of whom 2,174 (8.9%) died within 30 days. Of the 22,910 transferred patients originally admitted to a non-ICU service, 5,464 (23.8%) underwent ICU transfer within 48 hours of arrival. Cohort characteristics are shown in Table 1.
Multivariable regression analyses demonstrated no significant association between weekend (versus weekday) transfer or increased time delay between patient acceptance and arrival (>48 hours) and adjusted odds of ICU transfer within 48 hours or 30-day mortality. However, they did demonstrate that nighttime (versus daytime) transfer was associated with greater adjusted odds of both ICU transfer and 30-day mortality. Increased admitting team busyness was associated with lower adjusted odds of ICU transfer but was not significantly associated with adjusted odds of 30-day mortality (Table 2). As expected, decreased time delay between patient acceptance and arrival (0-12 hours) was associated with increased adjusted odds of both ICU transfer (adjusted OR 2.68; 95% CI 2.29, 3.15) and 30-day mortality (adjusted OR 1.25; 95% CI 1.03, 1.53) compared with 12-24 hours (results not shown). Time delay >48 hours was not associated with either outcome.
Regression analyses with the combined day/time variable demonstrated that compared with Monday daytime transfer, Sunday night transfer was significantly associated with increased adjusted odds of 30-day mortality, and Friday night transfer was associated with a trend toward increased 30-day mortality (adjusted OR [aOR] 1.88; 95% CI 1.25, 2.82, and aOR 1.43; 95% CI 0.99, 2.06, respectively). We also found that all nighttime transfers (ie, Monday through Sunday night) were associated with increased adjusted odds of ICU transfer within 48 hours (as compared with Monday daytime transfer). Other days/time analyses were not significant.
Univariable and multivariable analyses stratified by service were performed (Appendix). Multivariable stratified analyses demonstrated that weekend transfer, nighttime transfer, and increased admitting team busyness were associated with increased adjusted odds of 30-day mortality among cardiothoracic (CT) and gastrointestinal (GI) surgical service patients. Increased admitting team busyness was also associated with increased mortality among ICU service patients but was associated with decreased mortality among cardiology service patients. An increased time delay between patient acceptance and arrival was associated with decreased mortality among CT and GI surgical service patients (Figure; Appendix). Other adjusted stratified outcomes were not significant.
DISCUSSION
In this study of 24,352 patients undergoing IHT, we found no significant association between weekend transfer or increased time delay between transfer acceptance and arrival and patient outcomes in the cohort as a whole; but we found that nighttime transfer is associated with increased adjusted odds of both ICU transfer within 48 hours and 30-day mortality. Our analyses combining day-of-week and time-of-day demonstrate that Sunday night transfer is particularly associated with increased adjusted odds of 30-day mortality (as compared with Monday daytime transfer), and show a trend toward increased mortality with Friday night transfers. These detailed analyses otherwise reinforce that nighttime transfer across all nights of the week is associated with increased adjusted odds of ICU transfer within 48 hours. We also found that increased admitting team busyness on the day of patient transfer is associated with decreased odds of ICU transfer, though this may solely be reflective of higher turnover services (ie, cardiology) caring for lower acuity patients, as suggested by secondary analyses stratified by service. In addition, secondary analyses demonstrated differential associations between weekend transfers, nighttime transfers, and increased team busyness on the odds of 30-day mortality based on service of transfer. These analyses showed that patients transferred to higher acuity services requiring procedural care, including CT surgery, GI surgery, and Medical ICU, do worse under all three circumstances as compared with patients transferred to other services. Secondary analyses also demonstrated that increased time delay between patient acceptance and arrival is inversely associated with 30-day mortality among CT and GI surgery service patients, likely reflecting lower acuity patients (ie, less sick patients are less rapidly transferred).
There are several possible explanations for these findings. Patients transferred to surgical services at night may reflect a more urgent need for surgery and include a sicker cohort of patients, possibly explaining these findings. Alternatively, or in addition, both weekend and nighttime hospital admission expose patients to similar potential risks, ie, limited resources available during off-peak hours. Our findings could, therefore, reflect the possibility that patients transferred to higher acuity services in need of procedural care are most vulnerable to off-peak timing of transfer. Similar data looking at patients admitted through the emergency room (ER) find the strongest effect of off-peak admissions on patients in need of procedures, including GI hemorrhage,12 atrial fibrillation13 and acute myocardial infarction (AMI),14 arguably because of the limited availability of necessary interventions. Patients undergoing IHT are a sicker cohort of patients than those admitted through the ER, and, therefore, may be even more vulnerable to these issues.3,5 This is supported by our findings that Sunday night transfers (and trend toward Friday night transfers) are associated with greater mortality compared with Monday daytime transfers, when at-the-ready resources and/or specialty personnel may be less available (Sunday night), and delays until receipt of necessary procedures may be longer (Friday night). Though we did not observe similar results among cardiology service transfers, as may be expected based on existing literature,13,14 this subset of patients includes more heterogeneous diagnoses, (ie, not solely those that require acute intervention) and exhibited a low level of acuity (low Elixhauser score and DRG-weight, data not shown).
We also found that increased admitting team busyness on the day of patient transfer is associated with increased odds of 30-day mortality among CT surgery, GI surgery, and ICU service transfers. As above, there are several possible explanations for this finding. It is possible that among these services, only the sickest/neediest patients are accepted for transfer when teams are busiest, explaining our findings. Though this explanation is possible, the measure of team “busyness” includes patient discharge, thereby increasing, not decreasing, availability for incoming patients, making this explanation less likely. Alternatively, it is possible that this finding is reflective of reverse causation, ie, that teams have less ability to discharge/admit new patients when caring for particularly sick/unstable patient transfers, though this assumes that transferred patients arrive earlier in the day, (eg, in time to influence discharge decisions), which infrequently occurs (Table 1). Lastly, it is possible that this subset of patients will be more vulnerable to the workload of the team that is caring for them at the time of their arrival. With high patient turnover (admissions/discharges), the time allocated to each patient’s care may be diminished (ie, “work compression,” trying to do the same amount of work in less time), and may result in decreased time to care for the transferred patient. This has been shown to influence patient outcomes at the time of patient discharge.10
In trying to understand why we observed an inverse relationship between admitting team busyness and odds of ICU transfer within 48 hours, we believe this finding is largely driven by cardiology service transfers, which comprise the highest volume of transferred patients in our cohort (Table 1), and are low acuity patients. Within this population of patients, admitting team busyness is likely a surrogate variable for high turnover/low acuity. This idea is supported by our findings that admitting team busyness is associated with decreased adjusted odds of 30-day mortality in this group (and only in this group).
Similarly, our observed inverse relationship between increased time delay and 30-day mortality among CT and GI surgical service patients is also likely reflective of lower acuity patients. We anticipated that decreased time delay (0-12 hours) would be reflective of greater patient acuity (supported by our findings that decreased time delay is associated with increased odds of ICU transfer and 30-day mortality). However, our findings also suggest that increased time delay (>48 hours) is similarly representative of lower patient acuity and therefore an imperfect measure of discontinuity and/or harmful delays in care during IHT (see limitations below).
Our study is subject to several limitations. This is a single site study; given known variation in transfer practices between hospitals,3 it is possible that our findings are not generalizable. However, given similar existing data on patients admitted through the ER, it is likely our findings may be reflective of IHT to similar tertiary referral hospitals. Second, although we adjusted for patient characteristics, there remains the possibility of unmeasured confounding and other bias that account for our results, as discussed. Third, although the definition of “busyness” used in this study was chosen based on prior data demonstrating an effect on patient outcomes,10 we did not include other measures of busyness that may influence outcomes of transferred patients such as overall team census or hospital busyness. However, the workload associated with a high volume of patient admissions and discharges is arguably a greater reflection of “work compression” for the admitting team compared with overall team census, which may reflect a more static workload with less impact on the care of a newly transferred patient. Also, although hospital census may influence the ability to transfer (ie, lower volume of transferred patients during times of high hospital census), this likely has less of an impact on the direct care of transferred patients than the admitting team’s workload. It is more likely that it would serve as a confounder (eg, sicker patients are accepted for transfer despite high hospital census, while lower risk patients are not).
Nevertheless, future studies should further evaluate the association with other measures of busyness/workload and outcomes of transferred patients. Lastly, though we anticipated time delay between transfer acceptance and arrival would be correlated with patient acuity, we hypothesized that longer delay might affect patient continuity and communication and impact patient outcomes. However, our results demonstrate that our measurement of this variable was unsuccessful in unraveling patient acuity from our intended evaluation of these vulnerable aspects of IHT. It is likely that a more detailed evaluation is required to explore potential challenges more fully that may occur with greater time delays (eg, suboptimal communication regarding changes in clinical status during this time period, delays in treatment). Similarly, though our study evaluates the association between nighttime and weekend transfer (and the interaction between these) with patient outcomes, we did not evaluate other intermediate outcomes that may be more affected by the timing of transfer, such as diagnostic errors or delays in procedural care, which warrant further investigation. We do not directly examine the underlying reasons that explain our observed associations, and thus more research is needed to identify these as well as design and evaluate solutions.
Collectively, our findings suggest that high acuity patients in need of procedural care experience worse outcomes during off-peak times of transfer, and during times of high care-team workload. Though further research is needed to identify underlying reasons to explain our findings, both the timing of patient transfer (when modifiable) and workload of the team caring for the patient on arrival may serve as potential targets for interventions to improve the quality and safety of IHT for patients at greatest risk.
Disclosures
Dr. Mueller and Dr. Schnipper have nothing to disclose. Ms. Fiskio has nothing to disclose. Dr. Schnipper is the recipient of grant funding from Mallinckrodt Pharmaceuticals to conduct an investigator-initiated study of predictors and impact of opioid-related adverse drug events.
The transfer of patients between acute care hospitals (interhospital transfer [IHT]) occurs regularly among patients with a variety of diagnoses, in theory, to gain access to unique specialty services and/or a higher level of care, among other reasons.1,2
However, the practice of IHT is variable and nonstandardized,3,4 and existing data largely suggests that transferred patients experience worse outcomes, including longer length of stay, higher hospitalization costs, longer ICU time, and greater mortality, even with rigorous adjustment for confounding by indication.5,6 Though there are many possible reasons for these findings, existing literature suggests that there may be aspects of the transfer process itself which contribute to these outcomes.2,6,7
Understanding which aspects of the transfer process contribute to poor patient outcomes is a key first step toward the development of targeted quality improvement initiatives to improve this process of care. In this study, we aim to examine the association between select characteristics of the transfer process, including the timing of transfer and workload of the admitting physician team, and clinical outcomes among patients undergoing IHT.
METHODS
Data and Study Population
We performed a retrospective analysis of patients ≥age 18 years who transferred to Brigham and Women’s Hospital (BWH), a 777-bed tertiary care hospital, from another acute care hospital between January 2005, and September 2013. Dates of inclusion were purposefully chosen prior to BWH implementation of a new electronic health records system to avoid potential information bias. As at most academic medical centers, night coverage at BWH differs by service and includes a combination of long-call admitting teams and night float coverage. On weekends, many services are less well staffed, and some procedures may only be available if needed emergently. Some services have caps on the daily number of admissions or total patient census, but none have caps on the number of discharges per day. Patients were excluded from analysis if they left BWH against medical advice, were transferred from closely affiliated hospitals with shared personnel and electronic health records (Brigham and Women’s Faulkner Hospital, Dana Farber Cancer Institute), transferred from inpatient psychiatric or inpatient hospice facilities, or transferred to obstetrics or nursery services. Data were obtained from administrative sources and the research patient data repository (RPDR), a centralized clinical data repository that gathers data from various hospital legacy systems and stores them in one data warehouse.8 Our study was approved by the Partners Institutional Review Board (IRB) with a waiver of patient consent.
Transfer Process Characteristics
Predictors included select characteristics of the transfer process, including (1) Day of week of transfer, dichotomized into Friday through Sunday (“weekend”), versus Monday through Thursday (“weekday”);9 Friday was included with “weekend” given the suggestion of increased volume of transfers in advance of the weekend; (2) Time of arrival of the transferred patient, categorized into “daytime” (7
Outcomes
Outcomes included transfer to the intensive care unit (ICU) within 48 hours of arrival and 30-day mortality from date of index admission.5,6
Patient Characteristics
Covariates for adjustment included: patient age, sex, race, Elixhauser comorbidity score,11 Diagnosis-Related Group (DRG)-weight, insurance status, year of admission, number of preadmission medications, and service of admission.
Statistical Analyses
We used descriptive statistics to display baseline characteristics and performed a series of univariable and multivariable logistic regression models to obtain the adjusted odds of each transfer process characteristic on each outcome, adjusting for all covariates (proc logistic, SAS Statistical Software, Cary, North Carolina). For analyses of ICU transfer within 48 hours of arrival, all patients initially admitted to the ICU at time of transfer were excluded.
In the secondary analyses, we used a combined day-of-week and time-of-day variable (ie, Monday day, Monday evening, Monday night, Tuesday day, and so on, with Monday day as the reference group) to obtain a more detailed evaluation of timing of transfer on patient outcomes. We also performed stratified analyses to evaluate each transfer process characteristic on adjusted odds of 30-day mortality stratified by service of admission (ie, at the time of transfer to BWH), adjusting for all covariates. For all analyses, two-sided P values < .05 were considered significant.
RESULTS
Overall, 24,352 patients met our inclusion criteria and underwent IHT, of whom 2,174 (8.9%) died within 30 days. Of the 22,910 transferred patients originally admitted to a non-ICU service, 5,464 (23.8%) underwent ICU transfer within 48 hours of arrival. Cohort characteristics are shown in Table 1.
Multivariable regression analyses demonstrated no significant association between weekend (versus weekday) transfer or increased time delay between patient acceptance and arrival (>48 hours) and adjusted odds of ICU transfer within 48 hours or 30-day mortality. However, they did demonstrate that nighttime (versus daytime) transfer was associated with greater adjusted odds of both ICU transfer and 30-day mortality. Increased admitting team busyness was associated with lower adjusted odds of ICU transfer but was not significantly associated with adjusted odds of 30-day mortality (Table 2). As expected, decreased time delay between patient acceptance and arrival (0-12 hours) was associated with increased adjusted odds of both ICU transfer (adjusted OR 2.68; 95% CI 2.29, 3.15) and 30-day mortality (adjusted OR 1.25; 95% CI 1.03, 1.53) compared with 12-24 hours (results not shown). Time delay >48 hours was not associated with either outcome.
Regression analyses with the combined day/time variable demonstrated that compared with Monday daytime transfer, Sunday night transfer was significantly associated with increased adjusted odds of 30-day mortality, and Friday night transfer was associated with a trend toward increased 30-day mortality (adjusted OR [aOR] 1.88; 95% CI 1.25, 2.82, and aOR 1.43; 95% CI 0.99, 2.06, respectively). We also found that all nighttime transfers (ie, Monday through Sunday night) were associated with increased adjusted odds of ICU transfer within 48 hours (as compared with Monday daytime transfer). Other days/time analyses were not significant.
Univariable and multivariable analyses stratified by service were performed (Appendix). Multivariable stratified analyses demonstrated that weekend transfer, nighttime transfer, and increased admitting team busyness were associated with increased adjusted odds of 30-day mortality among cardiothoracic (CT) and gastrointestinal (GI) surgical service patients. Increased admitting team busyness was also associated with increased mortality among ICU service patients but was associated with decreased mortality among cardiology service patients. An increased time delay between patient acceptance and arrival was associated with decreased mortality among CT and GI surgical service patients (Figure; Appendix). Other adjusted stratified outcomes were not significant.
DISCUSSION
In this study of 24,352 patients undergoing IHT, we found no significant association between weekend transfer or increased time delay between transfer acceptance and arrival and patient outcomes in the cohort as a whole; but we found that nighttime transfer is associated with increased adjusted odds of both ICU transfer within 48 hours and 30-day mortality. Our analyses combining day-of-week and time-of-day demonstrate that Sunday night transfer is particularly associated with increased adjusted odds of 30-day mortality (as compared with Monday daytime transfer), and show a trend toward increased mortality with Friday night transfers. These detailed analyses otherwise reinforce that nighttime transfer across all nights of the week is associated with increased adjusted odds of ICU transfer within 48 hours. We also found that increased admitting team busyness on the day of patient transfer is associated with decreased odds of ICU transfer, though this may solely be reflective of higher turnover services (ie, cardiology) caring for lower acuity patients, as suggested by secondary analyses stratified by service. In addition, secondary analyses demonstrated differential associations between weekend transfers, nighttime transfers, and increased team busyness on the odds of 30-day mortality based on service of transfer. These analyses showed that patients transferred to higher acuity services requiring procedural care, including CT surgery, GI surgery, and Medical ICU, do worse under all three circumstances as compared with patients transferred to other services. Secondary analyses also demonstrated that increased time delay between patient acceptance and arrival is inversely associated with 30-day mortality among CT and GI surgery service patients, likely reflecting lower acuity patients (ie, less sick patients are less rapidly transferred).
There are several possible explanations for these findings. Patients transferred to surgical services at night may reflect a more urgent need for surgery and include a sicker cohort of patients, possibly explaining these findings. Alternatively, or in addition, both weekend and nighttime hospital admission expose patients to similar potential risks, ie, limited resources available during off-peak hours. Our findings could, therefore, reflect the possibility that patients transferred to higher acuity services in need of procedural care are most vulnerable to off-peak timing of transfer. Similar data looking at patients admitted through the emergency room (ER) find the strongest effect of off-peak admissions on patients in need of procedures, including GI hemorrhage,12 atrial fibrillation13 and acute myocardial infarction (AMI),14 arguably because of the limited availability of necessary interventions. Patients undergoing IHT are a sicker cohort of patients than those admitted through the ER, and, therefore, may be even more vulnerable to these issues.3,5 This is supported by our findings that Sunday night transfers (and trend toward Friday night transfers) are associated with greater mortality compared with Monday daytime transfers, when at-the-ready resources and/or specialty personnel may be less available (Sunday night), and delays until receipt of necessary procedures may be longer (Friday night). Though we did not observe similar results among cardiology service transfers, as may be expected based on existing literature,13,14 this subset of patients includes more heterogeneous diagnoses, (ie, not solely those that require acute intervention) and exhibited a low level of acuity (low Elixhauser score and DRG-weight, data not shown).
We also found that increased admitting team busyness on the day of patient transfer is associated with increased odds of 30-day mortality among CT surgery, GI surgery, and ICU service transfers. As above, there are several possible explanations for this finding. It is possible that among these services, only the sickest/neediest patients are accepted for transfer when teams are busiest, explaining our findings. Though this explanation is possible, the measure of team “busyness” includes patient discharge, thereby increasing, not decreasing, availability for incoming patients, making this explanation less likely. Alternatively, it is possible that this finding is reflective of reverse causation, ie, that teams have less ability to discharge/admit new patients when caring for particularly sick/unstable patient transfers, though this assumes that transferred patients arrive earlier in the day, (eg, in time to influence discharge decisions), which infrequently occurs (Table 1). Lastly, it is possible that this subset of patients will be more vulnerable to the workload of the team that is caring for them at the time of their arrival. With high patient turnover (admissions/discharges), the time allocated to each patient’s care may be diminished (ie, “work compression,” trying to do the same amount of work in less time), and may result in decreased time to care for the transferred patient. This has been shown to influence patient outcomes at the time of patient discharge.10
In trying to understand the inverse relationship we observed between admitting team busyness and odds of ICU transfer within 48 hours, we believe this finding is largely driven by cardiology service transfers, which comprise the highest volume of transferred patients in our cohort (Table 1) and are of low acuity. Within this population, admitting team busyness is likely a surrogate for high turnover and low acuity. This interpretation is supported by our finding that admitting team busyness is associated with decreased adjusted odds of 30-day mortality in this group (and only in this group).
Similarly, the observed inverse relationship between increased time delay and 30-day mortality among CT and GI surgery service patients likely reflects lower patient acuity. We anticipated that a short time delay (0-12 hours) would reflect greater patient acuity (supported by our findings that decreased time delay is associated with increased odds of ICU transfer and 30-day mortality). However, our findings suggest that a long time delay (>48 hours) similarly represents lower patient acuity, making this variable an imperfect measure of discontinuity and/or harmful delays in care during IHT (see limitations below).
Our study is subject to several limitations. First, this is a single-site study; given the known variation in transfer practices between hospitals,3 our findings may not be generalizable. However, given similar existing data on patients admitted through the ER, our findings are likely applicable to IHT at similar tertiary referral hospitals. Second, although we adjusted for patient characteristics, unmeasured confounding and other biases may account for our results, as discussed. Third, although the definition of “busyness” used in this study was chosen based on prior data demonstrating an effect on patient outcomes,10 we did not include other measures of busyness that may influence outcomes of transferred patients, such as overall team census or hospital busyness. However, the workload associated with a high volume of patient admissions and discharges arguably better reflects “work compression” for the admitting team than overall team census, which represents a more static workload with less impact on the care of a newly transferred patient. Also, although hospital census may influence the ability to transfer (ie, fewer patients are transferred during times of high hospital census), it likely has less of an impact on the direct care of transferred patients than the admitting team’s workload; it is more likely to act as a confounder (eg, sicker patients are accepted for transfer despite high hospital census, while lower-risk patients are not).
Nevertheless, future studies should evaluate the association of other measures of busyness/workload with outcomes of transferred patients. Lastly, though we anticipated that the time delay between transfer acceptance and arrival would be correlated with patient acuity, we hypothesized that longer delays might impair continuity and communication and thereby affect patient outcomes. However, our results demonstrate that our measurement of this variable was unable to disentangle patient acuity from these vulnerable aspects of IHT. A more detailed evaluation is likely required to explore more fully the challenges that may accompany greater time delays (eg, suboptimal communication regarding changes in clinical status during this period, delays in treatment). Similarly, though our study evaluates the association of nighttime and weekend transfer (and their interaction) with patient outcomes, we did not evaluate intermediate outcomes that may be more directly affected by the timing of transfer, such as diagnostic errors or delays in procedural care, which warrant further investigation. We also did not directly examine the underlying reasons for our observed associations; more research is needed to identify these reasons and to design and evaluate solutions.
Collectively, our findings suggest that high-acuity patients in need of procedural care experience worse outcomes when transferred during off-peak times and during times of high care-team workload. Though further research is needed to identify the underlying reasons for these findings, both the timing of patient transfer (when modifiable) and the workload of the team caring for the patient on arrival may serve as targets for interventions to improve the quality and safety of IHT for the patients at greatest risk.
Disclosures
Dr. Mueller and Ms. Fiskio have nothing to disclose. Dr. Schnipper is the recipient of grant funding from Mallinckrodt Pharmaceuticals to conduct an investigator-initiated study of predictors and impact of opioid-related adverse drug events.
1. Iwashyna TJ. The incomplete infrastructure for interhospital patient transfer. Crit Care Med. 2012;40(8):2470-2478. https://doi.org/10.1097/CCM.0b013e318254516f.
2. Mueller SK, Shannon E, Dalal A, Schnipper JL, Dykes P. Patient and physician experience with interhospital transfer: a qualitative study. J Patient Saf. 2018. https://doi.org/10.1097/PTS.0000000000000501.
3. Mueller SK, Zheng J, Orav EJ, Schnipper JL. Rates, predictors and variability of interhospital transfers: a national evaluation. J Hosp Med. 2017;12(6):435-442. https://doi.org/10.12788/jhm.2747.
4. Bosk EA, Veinot T, Iwashyna TJ. Which patients and where: a qualitative study of patient transfers from community hospitals. Med Care. 2011;49(6):592-598. https://doi.org/10.1097/MLR.0b013e31820fb71b.
5. Sokol-Hessner L, White AA, Davis KF, Herzig SJ, Hohmann SF. Interhospital transfer patients discharged by academic hospitalists and general internists: characteristics and outcomes. J Hosp Med. 2016;11(4):245-250. https://doi.org/10.1002/jhm.2515.
6. Mueller S, Zheng J, Orav EJ, Schnipper JL. Inter-hospital transfer and patient outcomes: a retrospective cohort study. BMJ Qual Saf. 2018. https://doi.org/10.1136/bmjqs-2018-008087.
7. Mueller SK, Schnipper JL. Physician perspectives on interhospital transfers. J Patient Saf. 2016. https://doi.org/10.1097/PTS.0000000000000312.
8. Research Patient Data Registry (RPDR). http://rc.partners.org/rpdr. Accessed April 20, 2018.
9. Bell CM, Redelmeier DA. Mortality among patients admitted to hospitals on weekends as compared with weekdays. N Engl J Med. 2001;345(9):663-668. https://doi.org/10.1056/NEJMsa003376.
10. Mueller SK, Donze J, Schnipper JL. Intern workload and discontinuity of care on 30-day readmission. Am J Med. 2013;126(1):81-88. https://doi.org/10.1016/j.amjmed.2012.09.003.
11. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27.
12. Ananthakrishnan AN, McGinley EL, Saeian K. Outcomes of weekend admissions for upper gastrointestinal hemorrhage: a nationwide analysis. Clin Gastroenterol Hepatol. 2009;7(3):296-302.e1. https://doi.org/10.1016/j.cgh.2008.08.013.
13. Deshmukh A, Pant S, Kumar G, Bursac Z, Paydak H, Mehta JL. Comparison of outcomes of weekend versus weekday admissions for atrial fibrillation. Am J Cardiol. 2012;110(2):208-211. https://doi.org/10.1016/j.amjcard.2012.03.011.
14. Clarke MS, Wills RA, Bowman RV, et al. Exploratory study of the ‘weekend effect’ for acute medical admissions to public hospitals in Queensland, Australia. Intern Med J. 2010;40(11):777-783. https://doi.org/10.1111/j.1445-5994.2009.02067.x.
© 2019 Society of Hospital Medicine
Critical Errors in Inhaler Technique among Children Hospitalized with Asthma
Many studies have shown that improved control can be achieved for most children with asthma if inhaled medications are taken correctly and adequately.1-3 Drug delivery studies have shown that the bioavailability of medication delivered by a pressurized metered-dose inhaler (MDI) increases from 34% to 83% with the addition of a spacer device, a difference largely attributable to decreased oropharyngeal deposition.1,4,5 The use of a spacer with proper technique has therefore been recommended for all pediatric patients.1,6
Poor inhaler technique is common among children.1,7 Previous studies of children with asthma have evaluated inhaler technique primarily in outpatient and community settings and have reported variable error rates (from 45% to >90%).8,9 No studies have evaluated inhaler technique among children hospitalized with asthma. As these children represent a particularly high-risk group for morbidity and mortality,10,11 the objectives of this study were to assess errors in inhaler technique in children hospitalized with asthma and to identify risk factors for improper use.
METHODS
As part of a larger interventional study, we conducted a prospective cross-sectional study at a tertiary urban children’s hospital. We enrolled a convenience sample of children aged 2-16 years admitted to the inpatient ward with an asthma exacerbation Monday-Friday from 8 AM to 6 PM. Participants were required to have a diagnosis of asthma (either an established diagnosis by their primary care provider or meeting the National Heart, Lung, and Blood Institute [NHLBI] criteria1), to have a consenting adult available, and to speak English. Patients were excluded if they had a codiagnosis of an additional respiratory disease (eg, pneumonia), cardiac disease, or sickle cell anemia. The Institutional Review Board approved this study.
We asked caregivers, or children >10 years old if they independently used their inhaler, to demonstrate their typical home inhaler technique using a spacer with mask (SM), spacer with mouthpiece (SMP), or no spacer (per their usual home practice). Inhaler technique was scored using a previously validated asthma checklist (Table 1).12 Certain steps in the checklist were identified as critical: (Step 1) removing the cap, (Step 3) attaching the inhaler to a spacer, (Step 7) taking six breaths (SM), and (Step 9) holding the breath for five seconds (SMP). Caregivers (but not children) were also asked to complete questionnaires assessing their health literacy (Brief Health Literacy Screen [BHLS]), confidence (Parent Asthma Management Self-Efficacy scale [PAMSE]), and barriers to managing their child’s asthma (Barriers to Asthma Care). Demographic and medical history information was extracted from the medical chart.
Inhaler technique was evaluated in two ways by comparing: (1) patients who missed one or more critical steps with those who missed none and (2) patients with an asthma checklist score <7 versus ≥7. Although inhaler technique has been measured in many different ways in past studies, these two markers (completion of 75% of steps and critical errors) were the most common.8 A sketch of how these two outcomes could be derived from checklist data appears below.
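To make the two outcome definitions concrete, the following is a minimal sketch in Python; the step names and the function are hypothetical placeholders, since the actual steps, point values, and critical-step designations come from the validated checklist in Table 1.

    # Hypothetical step names; the real steps are defined by the checklist (Table 1).
    CRITICAL_STEPS = {
        "SM":  {"remove_cap", "attach_spacer", "take_six_breaths"},  # Steps 1, 3, 7
        "SMP": {"remove_cap", "attach_spacer", "hold_breath_5s"},    # Steps 1, 3, 9
    }

    def classify_demonstration(steps_performed, device):
        """Derive both binary outcome measures from one observed demonstration."""
        checklist_score = len(steps_performed)  # one point per correctly performed step
        missed_critical = bool(CRITICAL_STEPS[device] - steps_performed)
        return {
            "missed_critical_step": missed_critical,     # outcome 1: >=1 critical step missed
            "low_checklist_score": checklist_score < 7,  # outcome 2: score <7 vs >=7
        }

    print(classify_demonstration({"remove_cap", "attach_spacer"}, "SM"))
    # {'missed_critical_step': True, 'low_checklist_score': True}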
We assessed a number of variables to evaluate their association with improper inhaler technique. For categorical variables, the association with each outcome was evaluated using relative risks (RRs). Bivariate P-values were calculated using chi-square or Fisher’s exact tests, as appropriate. Continuous variables were assessed for associations with each outcome using two-sample t-tests. Odds ratios (ORs) and 95% confidence intervals (CIs) were calculated using logistic regression analyses. Using a model entry criterion of P < .10 on univariate tests, variables were entered into a multivariable logistic regression model for each outcome. Full models with all eligible covariates and reduced models selected via a manual backward selection process were evaluated. Two-sided P-values <.05 were considered statistically significant.
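As a rough illustration of this two-stage analytic approach, the sketch below uses Python with hypothetical dataset and column names (the study itself used standard statistical software, and its manual backward-selection step is omitted here for brevity):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from scipy import stats

    df = pd.read_csv("inhaler_technique.csv")  # hypothetical analytic dataset
    outcome = "missed_critical_step"           # binary outcome coded 0/1
    candidates = ["age_years", "used_mouthpiece", "prior_picu", "daily_controller"]

    # Univariate screen: retain covariates with P < .10 for the multivariable model.
    # (A Fisher's exact test would replace chi-square when cell counts are small.)
    eligible = []
    for var in candidates:
        if df[var].nunique() == 2:  # categorical covariate: chi-square test
            p = stats.chi2_contingency(pd.crosstab(df[var], df[outcome]))[1]
        else:                       # continuous covariate: two-sample t-test
            a, b = (grp[var] for _, grp in df.groupby(outcome))
            p = stats.ttest_ind(a, b)[1]
        if p < 0.10:
            eligible.append(var)

    # Multivariable logistic regression; exponentiated coefficients give ORs,
    # and exponentiated confidence limits give the 95% CIs.
    fit = sm.Logit(df[outcome], sm.add_constant(df[eligible])).fit()
    or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
    or_table.columns = ["OR", "CI 2.5%", "CI 97.5%"]
    print(or_table)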
RESULTS
Participants
From October 2016 to June 2017, 380 patients were assessed for participation. Of these, 215 were excluded because a parent was not available (59% of exclusions), the family did not speak English (27%), or the child did not have an asthma diagnosis (eg, viral wheezing; 14%); an additional 52 (14% of those assessed) declined to participate. Therefore, a total of 113 participants were enrolled, with demonstrations provided by 100 caregivers and 13 children. The mean age of the patients was 6.6 ± 3.4 years, and over half (55%) of the participants had uncontrolled asthma (NHLBI criteria1).
Errors in Inhaler Technique
The mean asthma checklist score was 6.7 (maximum possible score: 10 for SM and 12 for SMP). About a third (35%) of participants scored <7 on the asthma checklist, and 42% missed at least one critical step. Children who missed a critical step were significantly older (7.8 [6.7-8.9] vs 5.8 [5.1-6.5] years; P = .002). More participants missed a critical step with the SMP than with the SM (75% [51%-90%] vs 36% [27%-46%]; P = .003), and SMP use was the most prominent factor for missing a critical step in the adjusted regression analysis (OR 6.95 [1.71-28.23], P = .007). The most commonly missed step for SM was breathing normally for 30 seconds; for SMP, the most commonly missed steps were breathing out fully and breathing away from the spacer (Table 1). Twenty participants (18%) did not use a spacer device; these patients were older than those who used a spacer (mean age 8.5 [6.7-10.4] vs 6.2 [5.6-6.9] years; P = .005), but no other significant differences were identified.
Demographic, Medical History, and Socioeconomic Characteristics
Overall, race, ethnicity, and insurance status did not vary significantly by asthma checklist score (≥7 vs <7) or by missing a critical step. Patients in the SM group who had received inpatient asthma education during a previous admission, had a history of pediatric intensive care unit (PICU) admission, or had been prescribed a daily controller were less likely to miss a critical step (Table 2). Parental education level varied, with 33% having a high school degree or less, but was not associated with asthma checklist score or with missing critical steps. Parental health literacy (BHLS) and parental confidence (PAMSE) were also not significantly associated with inhaler proficiency. However, transportation-related barriers were more common among patients with checklist scores <7 and among those who missed critical steps (OR 1.62 [1.06-2.46]; P = .02).
DISCUSSION
Nearly half (42%) of the participants in this study missed at least one critical step in inhaler use, and 18% did not use a spacer when demonstrating their technique. Despite robust studies demonstrating that asthma education can improve both asthma skills and clinical outcomes,13 our study demonstrates that a large gap remains in proper inhaler technique among patients presenting for inpatient asthma care. Specifically, in the mouthpiece group, steps related to breathing technique were the most commonly missed. Our results also show that inhaler technique errors were most prominent in the adolescent population, possibly coinciding with the transition to a mouthpiece and greater independence in medication administration. Adolescents may therefore be a high-impact population on which to focus inpatient asthma education. Additionally, we found that a previous PICU admission and previous inpatient asthma education were associated with missing fewer critical steps in inhaler technique. This finding is consistent with that of another study, conducted in the emergency department, which found that previous hospitalization for asthma was inversely related to improper inhaler use (RR 0.55, 95% CI 0.36-0.84).14 This supports the idea that inpatient education, when provided, can improve inhaler administration skills.
Previous studies conducted in the outpatient setting have demonstrated variable rates of inhaler skill, with 0% to approximately 89% of children performing all steps of inhalation correctly.8 This wide range may be related to variations in the number and definition of critical steps between studies. In our study, we designated removing the cap, attaching a spacer, and adequate breathing technique as critical steps because failure to complete them would significantly reduce lung deposition of medication. While past studies have evaluated MDI technique and spacer devices, our study is the first to report differences in technique problems between SM and SMP use. As asthma educational interventions are developed and implemented, it is important to recognize that different steps in inhaler technique are being missed by those using a mask versus a mouthpiece.
This study has several limitations. It was conducted at a single center with a primarily urban, English-speaking population; however, the study population reflects the racial diversity of pediatric asthma patients, and further studies may explore the reproducibility of these findings at multiple centers and with non-English-speaking families. The study included younger patients than some previous publications investigating asthma; however, all patients met the criteria for asthma diagnosis, and this age range is reflective of patients presenting for inpatient asthma care. Furthermore, because of our daytime research hours, 59% of excluded patients were excluded because a primary caregiver was not available. These families may also have decreased access to inpatient asthma educators and may be another target group for future studies. Finally, a large proportion of parents in our sample had a college education or greater; however, there was no association in our analysis between parental education level and inhaler proficiency.
The findings from this study indicate that continued efforts are needed to ensure that inhaler technique is adequate for all families regardless of educational status or socioeconomic background, especially for adolescents and in the setting of poor asthma control. Furthermore, our findings support that inhaler technique education may be beneficial in the inpatient setting and that acute care settings can provide a valuable “teachable moment.”14,15
CONCLUSION
Errors in inhaler technique are prevalent in pediatric inpatients with asthma, primarily those using a mouthpiece device. Educational efforts in both inpatient and outpatient settings have the potential to improve drug delivery and therefore asthma control. Inpatient hospitalization may serve as a platform for further studies to investigate innovative educational interventions.
Acknowledgments
The authors thank Tina Carter for her assistance in the recruitment and data collection and Ashley Hull and Susannah Butters for training the study staff on the use of the asthma checklist.
Disclosures
Dr. Gupta receives research grant support from the National Institutes of Health and the United Healthcare Group. Dr. Gupta serves as a consultant for DBV Technology, Aimmune Therapeutics, Kaleo, and BEFORE Brands. Dr. Gupta has received lecture fees/honoraria from the Allergy & Asthma Network and the American College of Allergy, Asthma & Immunology. Dr. Press reports research support from the Chicago Center for Diabetes Translation Research Pilot and Feasibility Grant, the Bucksbaum Institute for Clinical Excellence Pilot Grant Program, the Academy of Distinguished Medical Educators, the Development of Novel Hospital-initiated Care Bundle in Adults Hospitalized for Acute Asthma: the 41st Multicenter Airway Research Collaboration (MARC-41) Study, UCM’s Innovation Grant Program, the University of Chicago-Chapin Hall Joint Research Fund, the NIH/NHLBI Loan Repayment Program, 1 K23 HL118151 01, NIH NHLBI R03 (RFA-HL-18-025), the George and Carol Abramson Pilot Awards, the COPD Foundation Green Shoots Grant, the University of Chicago Women’s Board Grant, NIH NHLBI UG1 (RFA-HL-17-009), and the CTSA Pilot Award, all outside the submitted work. These disclosures have been reported to Dr. Press’s institutional review board. Additionally, a management plan is on file that details how to address conflicts such as these, which are sources of research support but do not directly support the work at hand. The remaining authors have no conflicts of interest relevant to this article to disclose.
Funding
This study was funded by internal grants from the Ann and Robert H. Lurie Children’s Hospital of Chicago. Dr. Press was funded by grant K23HL118151.
1. Expert Panel Report 3: guidelines for the diagnosis and management of asthma: full report. Washington, DC: US Department of Health and Human Services, National Institutes of Health, National Heart, Lung, and Blood Institute; 2007.
2. Hekking PP, Wener RR, Amelink M, Zwinderman AH, Bouvy ML, Bel EH. The prevalence of severe refractory asthma. J Allergy Clin Immunol. 2015;135(4):896-902. doi: 10.1016/j.jaci.2014.08.042.
3. Peters SP, Ferguson G, Deniz Y, Reisner C. Uncontrolled asthma: a review of the prevalence, disease burden and options for treatment. Respir Med. 2006;100(7):1139-1151. doi: 10.1016/j.rmed.2006.03.031.
4. Dickens GR, Wermeling DP, Matheny CJ, et al. Pharmacokinetics of flunisolide administered via metered dose inhaler with and without a spacer device and following oral administration. Ann Allergy Asthma Immunol. 2000;84(5):528-532. doi: 10.1016/S1081-1206(10)62517-3.
5. Nikander K, Nicholls C, Denyer J, Pritchard J. The evolution of spacers and valved holding chambers. J Aerosol Med Pulm Drug Deliv. 2014;27(1):S4-S23. doi: 10.1089/jamp.2013.1076.
6. Rubin BK, Fink JB. The delivery of inhaled medication to the young child. Pediatr Clin North Am. 2003;50(3):717-731. doi: 10.1016/S0031-3955(03)00049-X.
7. Roland NJ, Bhalla RK, Earis J. The local side effects of inhaled corticosteroids: current understanding and review of the literature. Chest. 2004;126(1):213-219. doi: 10.1378/chest.126.1.213.
8. Gillette C, Rockich-Winston N, Kuhn JA, Flesher S, Shepherd M. Inhaler technique in children with asthma: a systematic review. Acad Pediatr. 2016;16(7):605-615. doi: 10.1016/j.acap.2016.04.006.
9. Pappalardo AA, Karavolos K, Martin MA. What really happens in the home: the medication environment of urban, minority youth. J Allergy Clin Immunol Pract. 2017;5(3):764-770. doi: 10.1016/j.jaip.2016.09.046.
10. Crane J, Pearce N, Burgess C, Woodman K, Robson B, Beasley R. Markers of risk of asthma death or readmission in the 12 months following a hospital admission for asthma. Int J Epidemiol. 1992;21(4):737-744. doi: 10.1093/ije/21.4.737.
11. Turner MO, Noertjojo K, Vedal S, Bai T, Crump S, Fitzgerald JM. Risk factors for near-fatal asthma. A case-control study in hospitalized patients with asthma. Am J Respir Crit Care Med. 1998;157(6 Pt 1):1804-1809. doi: 10.1164/ajrccm.157.6.9708092.
12. Press VG, Arora VM, Shah LM, et al. Misuse of respiratory inhalers in hospitalized patients with asthma or COPD. J Gen Intern Med. 2011;26(6):635-642. doi: 10.1007/s11606-010-1624-2.
13. Guevara JP, Wolf FM, Grum CM, Clark NM. Effects of educational interventions for self management of asthma in children and adolescents: systematic review and meta-analysis. BMJ. 2003;326(7402):1308-1309. doi: 10.1136/bmj.326.7402.1308.
14. Scarfone RJ, Capraro GA, Zorc JJ, Zhao H. Demonstrated use of metered-dose inhalers and peak flow meters by children and adolescents with acute asthma exacerbations. Arch Pediatr Adolesc Med. 2002;156(4):378-383. doi: 10.1001/archpedi.156.4.378.
15. Sockrider MM, Abramson S, Brooks E, et al. Delivering tailored asthma family education in a pediatric emergency department setting: a pilot study. Pediatrics. 2006;117(4 Pt 2):S135-S144. doi: 10.1542/peds.2005-2000K.
© 2019 Society of Hospital Medicine
The Current State of Advanced Practice Provider Fellowships in Hospital Medicine: A Survey of Program Directors
Postgraduate training for physician assistants (PAs) and nurse practitioners (NPs) is a rapidly evolving field. It has been estimated that the number of these advanced practice providers (APPs) almost doubled between 2000 and 2016 (from 15.3 to 28.2 per 100 physicians) and is expected to double again by 2030.
Historically, postgraduate APP fellowships have functioned to help bridge the gap in clinical practice experience between physicians and APPs.
First described in 2010 by the Mayo Clinic,
METHODS
This was a cross-sectional study of all APP adult and pediatric fellowships in hospital medicine, in the United States, that were identifiable through May 2018. Multiple methods were used to identify all active fellowships. First, all training programs offering a Hospital Medicine Fellowship in the ARC-PA and Association of Postgraduate PA Programs databases were noted. Second, questionnaires were given out at the NP/PA forum at the national SHM conference in 2018 to gather information on existing APP fellowships. Third, similar online requests to identify known programs were posted to the SHM web forum Hospital Medicine Exchange (HMX). Fourth, Internet searches were used to discover additional programs. Once those fellowships were identified, surveys were sent to their program directors (PDs). These surveys not only asked the PDs about their fellowship but also asked them to identify additional APP fellowships beyond those that we had captured. Once additional programs were identified, a second round of surveys was sent to their PDs. This was performed in an iterative fashion until no additional fellowships were discovered.
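The iterative identification process lends itself to a simple worked illustration. Below is a minimal, self-contained sketch in Python (chosen purely for illustration; the study describes a manual survey process, not software) of the snowball approach: survey every known program director, add any newly referred programs, and repeat until a round turns up nothing new. The referral graph here is invented.

```python
# Hypothetical referral graph: program -> programs its PD mentioned.
REFERRALS = {
    "A": {"B"}, "B": set(), "C": {"D", "E"}, "D": set(), "E": {"A"},
}

def identify_all_fellowships(seed_programs):
    """Snowball identification: survey PDs in waves until no new referrals."""
    known = set(seed_programs)    # found via databases, SHM forum, web searches
    pending = set(seed_programs)  # PDs not yet surveyed
    while pending:
        referred = set()
        for program in pending:
            referred |= REFERRALS.get(program, set())
        pending = referred - known  # only survey genuinely new programs
        known |= pending
    return known

print(sorted(identify_all_fellowships({"A", "C"})))  # ['A', 'B', 'C', 'D', 'E']
```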
The survey tool was developed and validated internally in the AAMC Survey Development style18 and was influenced by prior validated surveys of postgraduate medical fellowships.10,
A web-based survey platform (Qualtrics) was used to distribute the questionnaire to the PDs by e-mail. Follow-up e-mail reminders were sent to all nonresponders to encourage full participation. Survey completion was voluntary; no financial incentives or gifts were offered. IRB approval was obtained at Johns Hopkins Bayview (IRB number 00181629). Descriptive statistics (proportions, means, and ranges as appropriate) were calculated for all variables. Stata 13 (StataCorp LP, College Station, Texas) was used for data analysis.
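For readers who want to reproduce the descriptive summaries, the sketch below shows the same computations (proportions for categorical variables; means and ranges for continuous ones) in Python/pandas rather than the Stata 13 actually used. All values are invented stand-ins, not study data.

```python
import pandas as pd

# Hypothetical program-level data; values are illustrative only.
df = pd.DataFrame({
    "program": list("ABCDEFGHIJ"),
    "beds": [213, 350, 420, 450, 500, 520, 550, 600, 680, 900],
    "accepts_np_and_pa": [True] * 8 + [False] * 2,
})

# Proportion for a categorical variable (e.g., "80% accept both NPs and PAs").
print(f"accepts both: {df['accepts_np_and_pa'].mean():.0%}")

# Mean and range for a continuous variable (e.g., hospital bed count).
print(df["beds"].mean(), df["beds"].min(), df["beds"].max())
```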
RESULTS
In total, 11 fellowships were identified using our multimethod approach. We found four (36%) programs by utilizing existing online databases, two (18%) through the SHM questionnaire and HMX forum, three (27%) through internet searches, and the remaining two (18%) were referred to us by the other PDs who were surveyed. Of the programs surveyed, 10 were adult programs and one was a pediatric program. Surveys were sent to the PDs of the 11 fellowships, and all but one of them (10/11, 91%) responded. Respondent programs were given alphabetical designations A through J (Table).
Fellowship and Individual Characteristics
Most programs have been in existence for five years or fewer. Eighty percent of the programs are about one year in duration; two outlier programs have fellowship lengths of six months and 18 months. The main hospital where training occurs has a mean of 496 beds (range 213 to 900). Ninety percent of the hospitals also have physician residency training programs. Sixty percent of programs enroll two to four fellows per year while 40% enroll five or more. The salary range paid by the programs is $55,000 to >$70,000, and half the programs pay more than $65,000.
The majority of fellows accepted into APP fellowships in hospital medicine are women. Eighty percent of fellows are 26-30 years old, and 90% of fellows have been out of NP or PA school for one year or less. Both NP and PA applicants are accepted in 80% of fellowships.
Program Rationales
All programs reported that training and retaining applicants is the main driver for developing their fellowship, and 50% of them offer financial incentives for retention upon successful completion of the program. Forty percent of PDs stated that there is an implicit or explicit understanding that successful completion of the fellowship would result in further employment. Over the last five years, 89% (range: 71%-100%) of graduates were asked to remain for a full-time position after program completion.
In addition to training and retention, building an interprofessional team (50%), managing patient volume (30%), and reducing overhead (20%) were also reported as rationales for program development. The majority of programs (80%) have fellows bill for clinical services, and five of those eight programs do so after their fellows become more clinically competent.
Curricula
Of the nine adult programs, 67% teach explicitly to SHM core competencies and 33% send their fellows to the SHM NP/PA Boot Camp. Thirty percent of fellowships partner formally with either a physician residency or a local PA program to develop educational content. Six of the nine programs with active physician residencies, including the pediatric fellowship, offer shared educational experiences for the residents and APPs.
There are notable differences in clinical rotations between the programs (Figure 1). No single rotation is universally required, although general hospital internal medicine is required in all adult fellowships. The majority (80%) of programs offer at least one elective. Six programs reported mandatory rotations outside the department of medicine, most commonly neurology or the stroke service (four programs). One program reported exclusively general medicine rotations, with no subspecialty electives.
There are also differences between programs with respect to educational experiences and learning formats (Figure 2). Each fellowship takes a unique approach to clinical instruction; teaching rounds and lecture attendance are the only experiences that are mandatory across the board. Grand rounds are available, but not required, in all programs. Ninety percent of programs offer or require fellow presentations, journal clubs, reading assignments, or scholarly projects. Fellow presentations (70%) and journal club attendance (60%) are required in more than half the programs; however, reading assignments (30%) and scholarly projects (20%) are rarely required.
Methods of Fellow Assessment
Each program surveyed has a unique method of fellow assessment. Ninety percent of the programs use more than one method to assess their fellows. Faculty reviews are most commonly used and are conducted in all rotations in 80% of fellowships. Both self-assessment exercises and written examinations are used in some rotations by the majority of programs. Capstone projects are required infrequently (30%).
DISCUSSION
We found several commonalities between the fellowships surveyed. Many of the program characteristics, such as years in operation, salary, duration, and lack of accreditation, are quite similar. Most fellowships also have a similar rationale for building their programs and use resources from the SHM to inform their curricula. Fellows, on average, share several demographic characteristics, such as age, gender, and time out of schooling. Conversely, we found wide variability in clinical rotations, the general teaching structure, and methods of fellow evaluation.
There have been several publications detailing successful individual APP fellowships in medical subspecialties,
It is noteworthy that every program surveyed was created with training and retention in mind, rather than other factors like decreasing overhead or managing patient volume. Training one’s own APPs so that they can learn on the job, come to understand expectations within a group, and witness the culture is extremely valuable. From a patient safety standpoint, it has been documented that physician hospitalists straight out of residency have a higher patient mortality compared with more experienced providers.
Several limitations to this study should be considered. While we used multiple strategies to locate as many fellowships as possible, it is unlikely that we successfully captured all existing programs, and new programs are being developed annually. We also relied on self-reported data from PDs. While we would expect PDs to provide accurate data, we could not externally validate their answers. Additionally, although our survey tool was reviewed extensively and validated internally, it was developed de novo for this study.
CONCLUSION
APP fellowships in hospital medicine have experienced marked growth since the first program was described in 2010. The majority of programs are 12 months long, operate in existing teaching centers, and are intended to further enhance the training and retention of newly graduated PAs and NPs. Despite their similarities, fellowships have striking variability in their methods of teaching and assessing their learners. Best practices have yet to be identified, and further study is required to determine how to standardize curricula across the board.
Acknowledgments
Disclosures
The authors report no conflicts of interest.
Funding
This project was supported by the Johns Hopkins School of Medicine Biostatistics, Epidemiology and Data Management (BEAD) Core. Dr. Wright is the Anne Gaines and G. Thomas Miller Professor of Medicine, which is supported through the Johns Hopkins’ Center for Innovative Medicine.
1. Auerbach DI, Staiger DO, Buerhaus PI. Growing ranks of advanced practice clinicians — implications for the physician workforce. N Engl J Med. 2018;378(25):2358-2360. doi: 10.1056/nejmp1801869. PubMed
2. Darves B. Midlevels make a rocky entrance into hospital medicine. Todays Hospitalist. 2007;5(1):28-32.
3. Polansky M. A historical perspective on postgraduate physician assistant education and the association of postgraduate physician assistant programs. J Physician Assist Educ. 2007;18(3):100-108. doi: 10.1097/01367895-200718030-00014.
4. FNP & AGNP Certification Candidate Handbook. The American Academy of Nurse Practitioners National Certification Board, Inc; 2018. https://www.aanpcert.org/resource/documents/AGNP FNP Candidate Handbook.pdf. Accessed December 20, 2018.
5. Become a PA: Getting Your Prerequisites and Certification. AAPA. https://www.aapa.org/career-central/become-a-pa/. Accessed December 20, 2018.
6. ACGME Common Program Requirements. ACGME; 2017. https://www.acgme.org/Portals/0/PFAssets/ProgramRequirements/CPRs_2017-07-01.pdf. Accessed December 20, 2018.
7. Committee on the Learning Health Care System in America; Institute of Medicine, Smith MD, Smith M, Saunders R, Stuckhardt L, McGinnis JM. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Washington, DC: National Academies Press; 2013. PubMed
8. The Future of Nursing: Leading Change, Advancing Health. The National Academies Press; 2014. https://www.nap.edu/read/12956/chapter/1. Accessed December 16, 2018.
9. Hussaini SS, Bushardt RL, Gonsalves WC, et al. Accreditation and implications of clinical postgraduate PA training programs. JAAPA. 2016;29(5):1-7. doi: 10.1097/01.jaa.0000482298.17821.fb. PubMed
10. Polansky M, Garver GJH, Hilton G. Postgraduate clinical education of physician assistants. J Physician Assist Educ. 2012;23(1):39-45. doi: 10.1097/01367895-201223010-00008.
11. Will KK, Budavari AI, Wilkens JA, Mishark K, Hartsell ZC. A hospitalist postgraduate training program for physician assistants. J Hosp Med. 2010;5(2):94-98. doi: 10.1002/jhm.619. PubMed
12. Kartha A, Restuccia JD, Burgess JF, et al. Nurse practitioner and physician assistant scope of practice in 118 acute care hospitals. J Hosp Med. 2014;9(10):615-620. doi: 10.1002/jhm.2231. PubMed
13. Singh S, Fletcher KE, Schapira MM, et al. A comparison of outcomes of general medical inpatient care provided by a hospitalist-physician assistant model vs a traditional resident-based model. J Hosp Med. 2011;6(3):122-130. doi: 10.1002/jhm.826. PubMed
14. Hussaini SS, Bushardt RL, Gonsalves WC, et al. Accreditation and implications of clinical postgraduate PA training programs. JAAPA. 2016;29(5):1-7. doi: 10.1097/01.jaa.0000482298.17821.fb. PubMed
15. Postgraduate Programs. ARC-PA. http://www.arc-pa.org/accreditation/postgraduate-programs. Accessed September 13, 2018.
16. National Nurse Practitioner Residency & Fellowship Training Consortium: Mission. https://www.nppostgradtraining.com/About-Us/Mission. Accessed September 27, 2018.
17. NP/PA Boot Camp. Society of Hospital Medicine. http://www.hospitalmedicine.org/events/nppa-boot-camp. Accessed September 13, 2018.
18. Gehlbach H, Artino Jr AR, Durning SJ. AM last page: survey development guidance for medical education researchers. Acad Med. 2010;85(5):925. doi: 10.1097/ACM.0b013e3181dd3e88. PubMed
19. Kraus C, Carlisle T, Carney D. Emergency Medicine Physician Assistant (EMPA) post-graduate training programs: program characteristics and training curricula. West J Emerg Med. 2018;19(5):803-807. doi: 10.5811/westjem.2018.6.37892.
20. Shah NH, Rhim HJH, Maniscalco J, Wilson K, Rassbach C. The current state of pediatric hospital medicine fellowships: A survey of program directors. J Hosp Med. 2016;11(5):324-328. doi: 10.1002/jhm.2571. PubMed
21. Thompson BM, Searle NS, Gruppen LD, Hatem CJ, Nelson E. A national survey of medical education fellowships. Med Educ Online. 2011;16(1):5642. doi: 10.3402/meo.v16i0.5642. PubMed
22. Hooker R. A physician assistant rheumatology fellowship. JAAPA. 2013;26(6):49-52. doi: 10.1097/01.jaa.0000430346.04435.e4. PubMed
23. Keizer T, Trangle M. The benefits of a physician assistant and/or nurse practitioner psychiatric postgraduate training program. Acad Psychiatry. 2015;39(6):691-694. doi: 10.1007/s40596-015-0331-z. PubMed
24. Miller A, Weiss J, Hill V, Lindaman K, Emory C. Implementation of a postgraduate orthopaedic physician assistant fellowship for improved specialty training. JBJS Journal of Orthopaedics for Physician Assistants. 2017;1. doi: 10.2106/jbjs.jopa.17.00021.
25. Sharma P, Brooks M, Roomiany P, Verma L, Criscione-Schreiber L. Physician assistant student training for the inpatient setting. J Physician Assist Educ. 2017;28(4):189-195. doi: 10.1097/jpa.0000000000000174. PubMed
26. Goodwin JS, Salameh H, Zhou J, Singh S, Kuo Y-F, Nattinger AB. Association of hospitalist years of experience with mortality in the hospitalized medicare population. JAMA Intern Med. 2018;178(2):196. doi: 10.1001/jamainternmed.2017.7049. PubMed
27. Barnes H. Exploring the factors that influence nurse practitioner role transition. J Nurse Pract. 2015;11(2):178-183. doi: 10.1016/j.nurpra.2014.11.004. PubMed
28. Will K, Williams J, Hilton G, Wilson L, Geyer H. Perceived efficacy and utility of postgraduate physician assistant training programs. JAAPA. 2016;29(3):46-48. doi: 10.1097/01.jaa.0000480569.39885.c8. PubMed
29. Torok H, Lackner C, Landis R, Wright S. Learning needs of physician assistants working in hospital medicine. J Hosp Med. 2011;7(3):190-194. doi: 10.1002/jhm.1001. PubMed
30. ten Cate O. Competency-based postgraduate medical education: past, present and future. GMS J Med Educ. 2017;34(5). doi: 10.3205/zma001146. PubMed
31. Exploring the ACGME Core Competencies (Part 1 of 7). NEJM Knowledge+. https://knowledgeplus.nejm.org/blog/exploring-acgme-core-competencies/. Accessed October 24, 2018.
32. Core Competencies. Society of Hospital Medicine. http://www.hospitalmedicine.org/professional-development/core-competencies/. Accessed October 24, 2018.
© 2019 Society of Hospital Medicine
Modifiable Factors Associated with Quality of Bowel Preparation Among Hospitalized Patients Undergoing Colonoscopy
Inadequate bowel preparation (IBP) at the time of inpatient colonoscopy is common and associated with increased length of stay and cost of care.1 The factors that contribute to IBP can be categorized into those that are modifiable and those that are nonmodifiable. While many factors have been associated with IBP, studies have been limited by small sample size or have combined inpatient/outpatient populations, thus limiting generalizability.1-5 Moreover, most factors associated with IBP, such as socioeconomic status, male gender, increased age, and comorbidities, are nonmodifiable. No studies have explicitly focused on modifiable risk factors, such as medication use and colonoscopy timing, or assessed the potential impact of modifying these factors.
In a large, multihospital system, we examined the frequency of IBP among inpatients undergoing colonoscopy along with factors associated with IBP. We attempted to identify
METHODS
Potential Predictors of IBP
Demographic data such as patient age, gender, ethnicity, body mass index (BMI), and insurance/payor status were obtained from the electronic health record (EHR). International Classification of Diseases, 9th and 10th Revisions, Clinical Modification (ICD-9/10-CM) codes were used to obtain patient comorbidities, including diabetes, coronary artery disease, heart failure, cirrhosis, gastroparesis, hypothyroidism, inflammatory bowel disease, constipation, stroke, dementia, dysphagia, and nausea/vomiting. Use of opioid medications within three days before colonoscopy was extracted from the medication administration record. These variables were chosen because they are biologically plausible modifiers of bowel preparation or had previously been assessed in the literature.1-6 The name and volume of the bowel preparation, classified as 4 L (GoLytely®) or <4 L (MoviPrep®), the time of day when the colonoscopy was performed, consumption of a solid diet the day prior to colonoscopy, the type of sedation used (conscious sedation or general anesthesia), and the total colonoscopy time (defined as the time from scope insertion to removal) were recorded. Hospitalization-related variables, including the number of hospitalizations in the year before the current hospitalization, the year in which the colonoscopy was performed, and the number of days from admission to colonoscopy, were also obtained from the EHR.
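As a concrete illustration of the comorbidity extraction step, the hedged Python sketch below maps raw ICD-9/10-CM codes to binary predictor flags. The code prefixes shown are common, well-known examples (e.g., ICD-10 E11 for type 2 diabetes); the study's exact code definitions are not given in the text.

```python
# Hypothetical prefix lists; the study's actual ICD-9/10-CM definitions
# are not specified in the text.
COMORBIDITY_PREFIXES = {
    "diabetes":      ("250", "E10", "E11"),
    "heart_failure": ("428", "I50"),
    "gastroparesis": ("536.3", "K31.84"),
}

def comorbidity_flags(icd_codes):
    """Map one patient's list of ICD codes to True/False comorbidity flags."""
    return {
        name: any(code.startswith(prefixes) for code in icd_codes)
        for name, prefixes in COMORBIDITY_PREFIXES.items()
    }

print(comorbidity_flags(["E11.9", "I10"]))
# {'diabetes': True, 'heart_failure': False, 'gastroparesis': False}
```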
Outcome Measures
An internally validated natural language algorithm using Structured Query Language (SQL) was used to search through colonoscopy reports and identify the adequacy of bowel preparation. ProVation® software allows the gastroenterologist to select from a set of terms describing bowel preparation in a drop-down menu format. In addition to the Aronchik scale (which allows the gastroenterologist to rate bowel preparation on a five-point scale: “excellent,” “good,” “fair,” “poor,” and “inadequate”), it also allows the provider to use terms such as “adequate” or “adequate to detect polyps >5 mm” as well as “unsatisfactory.”7 Mirroring prior literature, bowel preparation quality was classified as “adequate” or “inadequate”: “good” and “excellent” on the Aronchik scale were categorized as adequate, as was the term “adequate” in any form; “fair,” “poor,” or “inadequate” on the Aronchik scale were classified as inadequate, as was the term “unsatisfactory.” We evaluated hospital length of stay (LOS) as a secondary outcome measure.
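The dichotomization rule described above is simple enough to state as code. The sketch below re-expresses it in Python (the actual implementation was an internally validated SQL algorithm run against ProVation reports); the term lists come directly from the classification given in the text.

```python
# Terms mapped to "adequate" vs "inadequate" per the classification above.
# Exact-match sets are used so that "inadequate" is not caught by a
# substring test for "adequate"; the real algorithm matched "adequate"
# in any form.
ADEQUATE_TERMS = {"excellent", "good", "adequate",
                  "adequate to detect polyps >5 mm"}
INADEQUATE_TERMS = {"fair", "poor", "inadequate", "unsatisfactory"}

def classify_prep(term: str) -> str:
    t = term.strip().lower()
    if t in ADEQUATE_TERMS:
        return "adequate"
    if t in INADEQUATE_TERMS:
        return "inadequate"
    return "unclassified"  # surface unexpected terms for manual review

assert classify_prep("Good") == "adequate"
assert classify_prep("Fair") == "inadequate"
```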
Statistical Analysis
After describing the frequency of IBP, the quality of bowel preparation (adequate vs inadequate) was compared based on the predictors described above. Categorical variables were reported as frequencies with percentages and continuous variables were reported as medians with 25th-75th percentile values. The significance of the difference between the proportion or median values of those who had inadequate versus adequate bowel preparation was assessed. Two-sided chi-square analysis was used to assess the significance of differences between categorical variables and the Wilcoxon Rank-Sum test was used to assess the significance of differences between continuous variables.
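To make the univariate testing concrete, here is a small SciPy illustration of both tests (the study itself used Stata 13). The 2×2 counts are reconstructed approximately from the reported opiate percentages, and the continuous samples are invented.

```python
from scipy.stats import chi2_contingency, ranksums

# 2x2 table: rows = opiate within 3 days (yes/no),
# columns = (inadequate, adequate) preparation.
# Counts reconstructed approximately from reported rates; illustrative only.
table = [[2194, 1766],
         [2298, 2561]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square p = {p:.2g}")

# Wilcoxon rank-sum test for a continuous variable (invented samples).
stat, p = ranksums([1, 2, 2, 3, 5, 8], [1, 1, 2, 2, 3, 4])
print(f"rank-sum p = {p:.2g}")
```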
Multivariate logistic regression analysis was performed to assess the association between the aforementioned predictors and IBP, adjusting for all of these factors simultaneously and clustering the effect by endoscopist. To evaluate the potential impact of modifiable factors on IBP, we performed a counterfactual analysis in which the observed distribution was compared with a hypothetical population in which all the modifiable risk factors were optimal.
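A sketch of this modeling strategy follows: logistic regression with standard errors clustered by endoscopist, then a counterfactual prediction with all three modifiable exposures set to their optimal level. It is written in Python/statsmodels rather than the Stata actually used; the variable names, abridged covariate list, and synthetic data are all assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic patient-level data standing in for the real cohort.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "opiate_3d":   rng.integers(0, 2, n),   # opiate within 3 days
    "afternoon":   rng.integers(0, 2, n),   # colonoscopy after noon
    "solid_diet":  rng.integers(0, 2, n),   # solid diet the day before
    "age":         rng.normal(64, 12, n),
    "endoscopist": rng.integers(0, 40, n),  # cluster identifier
})
logit_p = -0.4 + 0.27 * df.opiate_3d + 0.3 * df.afternoon + 0.3 * df.solid_diet
df["ibp"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Logistic regression with cluster-robust SEs by endoscopist (abridged model).
model = smf.logit(
    "ibp ~ opiate_3d + afternoon + solid_diet + age", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["endoscopist"]})

# Counterfactual: set all three modifiable exposures to their optimal value.
cf = df.copy()
cf[["opiate_3d", "afternoon", "solid_diet"]] = 0
print("observed IBP rate:      ", round(df["ibp"].mean(), 3))
print("counterfactual IBP rate:", round(model.predict(cf).mean(), 3))
```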
RESULTS
Overall, 8,819 patients were included in our study population. They had a median age of 64 [53-76] years; 50.5% were female and 51% had an IBP. Patient characteristics and rates of IBP are presented in Table 1.
In unadjusted analyses, with regards to modifiable factors, opiate use within three days of colonoscopy was associated with a higher rate of IBP (55.4% vs 47.3%, P <.001), as was a lower volume (<4L) bowel preparation (55.3% vs 50.4%, P = .003). IBP was less frequent when colonoscopy was performed before noon vs afternoon (50.3% vs 57.4%, P < .001), and when patients were documented to receive a clear liquid diet or nil per os vs a solid diet the day prior to colonoscopy (50.3% vs 57.4%, P < .001). Overall bowel preparation quality improved over time (Figure 1). Median LOS was five [3-11] days. Patients who had IBP on their initial colonoscopy had a LOS one day longer than patients without IBP (six days vs five days, P < .001).
Multivariate Analysis
Table 2 shows the results of the multivariate analysis. The following modifiable factors were associated with IBP: opiate use within three days of the procedure (OR 1.31; 95% CI 1.18, 1.45), having the colonoscopy performed after 12:00
Potential Impact of Modifiable Variables
We conducted a counterfactual analysis based on a multivariate model to assess the impact of each modifiable risk factor on the IBP rate (Figure 1). In the included study population, 44.9% received an opiate, 39.3% had a colonoscopy after 12:00
DISCUSSION
In this large, multihospital cohort, IBP was documented in half (51%) of 8,819 inpatient colonoscopies performed. Nonmodifiable patient characteristics independently associated with IBP were age, male gender, white race, Medicare and Medicaid insurance, nausea/vomiting, dysphagia, and gastroparesis. Modifiable factors associated with lower rates of IBP included avoiding opiates within three days of colonoscopy, avoiding a solid diet the day prior to colonoscopy, and performing the colonoscopy before noon. The volume of bowel preparation consumed was not associated with IBP. In a counterfactual analysis, we found that if all three modifiable factors were optimized, the predicted rate of IBP would drop to 45%.
Many studies, including our analysis, have shown significant differences between the frequency of IBP in inpatient versus outpatient bowel preparations.8-11 Therefore, it is crucial to study IBP in these settings separately. Three single-institution studies, including a total of 898 patients, have identified risk factors for inpatient IBP. Individual studies ranged in size from 130 to 524 patients with rates of IBP ranging from 22%-57%.1-3 They found IBP to be associated with increasing age, lower income, ASA Grade >3, diabetes, coronary artery disease (CAD), nausea or vomiting, BMI >25, and chronic constipation. Modifiable factors included opiates, afternoon procedures, and runway times >6 hours.
We also found IBP to be associated with increasing age and male gender. However, we found no association with diabetes, chronic constipation, CAD, or BMI. As we were able to adjust for a wider variety of variables, it is possible that we were able to account for residual confounding better than previous studies. For example, we found that nausea/vomiting, dysphagia, and gastroparesis were associated with IBP. Gastroparesis with associated nausea and vomiting may be the mechanism by which diabetes increases the risk for IBP. Further studies are needed to assess whether interventions or alternative bowel cleansing in these patients can improve IBP. Finally, in contrast to studies with smaller cohorts, which found that lower volume bowel preparations improved preparation quality in the right colon,4,12 we found no association between IBP and the volume of bowel preparation consumed. Our impact analysis suggests that avoidance of opiates for at least three days before colonoscopy, avoidance of a solid diet on the day before colonoscopy, and performing all colonoscopies before noon would
The factors mentioned above may not always be amenable to modification. For example, for patients with active gastrointestinal bleeding, postponing colonoscopy by one day for the sake of maintaining a patient on a clear diet may not be feasible. Similarly, performing colonoscopies in the morning is highly dependent on endoscopy suite availability and hospital logistics. Denying opiates to patients experiencing severe pain is not ethical. In many scenarios, however, these variables could be modified, and institutional efforts to support these practices could yield considerable savings. Future prospective studies are needed to verify the real impact of these changes.
Further discussion is needed to contextualize the finding that colonoscopies performed in the afternoon are associated with worse bowel preparation quality. Previous research (albeit in the outpatient setting) has identified 11.8 hours as the upper limit for the time elapsed between the end of bowel preparation and colonoscopy.14 Another study found an inverse relationship between the quality of bowel preparation and the time elapsed after completion of the bowel preparation.15 This makes sense from a physiological perspective, as delaying the interval between completion of bowel preparation and the procedure allows chyme from the small intestine to reaccumulate in the colon. Anecdotally, at our institution as well as at many others, bowel preparations are ordered to start in the evening so that the full preparation is consumed by midnight; a preparation completed at midnight means only procedures started before roughly 11:48 AM fall within the 11.8-hour window. As a result of this practice, only patients who have their colonoscopies scheduled before noon fall within the optimal period. In the outpatient setting, the use of split preparations has eliminated the difference in the quality of bowel preparation between morning and afternoon colonoscopies.16 Prospective trials are needed to evaluate the use of split preparations to improve the quality of afternoon inpatient colonoscopies.
Few other strategies have been shown to mitigate IBP in the inpatient setting. In a small randomized controlled trial, Ergen et al. found that providing an educational booklet improved inpatient bowel preparation as measured by the Boston Bowel Preparation Scale.17 In a quasi-experimental study, Yadlapati et al. found that an automated split-dose bowel preparation protocol resulted in less IBP, fewer repeated procedures, shorter LOS, and lower hospital cost.18 Our study adds to these tools by identifying three additional risk factors that could be optimized for inpatients. Because our findings are observational, they should be confirmed in prospective trials. Our study also calls into question the impact of bowel preparation volume: we found no difference in the rate of IBP between low-volume and large-volume preparations. It is possible that other factors are more important than the specific preparation employed.
Interestingly, we found that IBP declined substantially in 2014 and continued to decline thereafter. Procedure year was the most influential risk factor for IBP (on par with gastroparesis). The reason for this is unclear, as rates of our modifiable risk factors did not differ substantially by year. Other possibilities include improved access (including weekend access) to endoscopy coinciding with the opening of a new endoscopy facility, and the use of an integrated irrigation pump system instead of manual syringes for flushing.
Our study has many strengths. It is by far the most extensive study of bowel preparation quality in inpatients to date and the only one that has included patient, procedural, and bowel preparation characteristics. The study also has several significant limitations. It was conducted within a single health system, which could limit generalizability. Nonetheless, the system comprises multiple hospitals in different parts of the United States (Ohio and Florida) and serves a broad population mix with differing levels of acuity. The retrospective nature of the assessment precludes establishing causation. However, we mitigated confounding by adjusting for a wide variety of factors, and there is a plausible physiological mechanism for each of the factors we studied. The retrospective design also predisposes our data to omissions and misrepresentations during the documentation process, especially with the use of ICD codes.19 Inaccuracies in coding are likely to bias toward the null, so the observed associations may underestimate the true associations.
Our inability to ascertain whether a patient completed the prescribed bowel preparation limited our ability to detect what may be a significant risk factor. Lastly, while clinically relevant, the Aronchick scale used to distinguish adequate from inadequate bowel preparation has never been validated, though it is frequently utilized and cited in the bowel preparation literature.20
CONCLUSIONS
In this large retrospective study evaluating bowel preparation quality in inpatients undergoing colonoscopy, we found that more than half of patients had IBP and that IBP was associated with an extra day of hospitalization. Our study identifies the patients at highest risk and identifies modifiable risk factors for IBP. Specifically, we found that avoidance of opiates and of a solid diet before colonoscopy, along with performing colonoscopies before noon, was associated with improved outcomes. Prospective studies are needed to confirm the effects of these interventions on bowel preparation quality.
Disclosures
Carol A Burke, MD has received research funding from Ferring Pharmaceuticals. Other authors have no conflicts of interest to disclose.
1. Yadlapati R, Johnston ER, Gregory DL, Ciolino JD, Cooper A, Keswani RN. Predictors of inadequate inpatient colonoscopy preparation and its association with hospital length of stay and costs. Dig Dis Sci. 2015;60(11):3482-3490. doi: 10.1007/s10620-015-3761-2.
2. Jawa H, Mosli M, Alsamadani W, et al. Predictors of inadequate bowel preparation for inpatient colonoscopy. Turk J Gastroenterol. 2017;28(6):460-464. doi: 10.5152/tjg.2017.17196.
3. McNabb-Baltar J, Dorreen A, Dhahab HA, et al. Age is the only predictor of poor bowel preparation in the hospitalized patient. Can J Gastroenterol Hepatol. 2016;2016:1-5. doi: 10.1155/2016/2139264.
4. Rotondano G, Rispo A, Bottiglieri ME, et al. Tu1503 Quality of bowel cleansing in hospitalized patients is not worse than that of outpatients undergoing colonoscopy: results of a multicenter prospective regional study. Gastrointest Endosc. 2014;79(5):AB564. doi: 10.1016/j.gie.2014.02.949.
5. Ness R. Predictors of inadequate bowel preparation for colonoscopy. Am J Gastroenterol. 2001;96(6):1797-1802. doi: 10.1016/s0002-9270(01)02437-6.
6. Johnson DA, Barkun AN, Cohen LB, et al. Optimizing adequacy of bowel cleansing for colonoscopy: recommendations from the US Multi-Society Task Force on Colorectal Cancer. Gastroenterology. 2014;147(4):903-924. doi: 10.1053/j.gastro.2014.07.002.
7. Aronchick CA, Lipshutz WH, Wright SH, et al. A novel tableted purgative for colonoscopic preparation: efficacy and safety comparisons with Colyte and Fleet Phospho-Soda. Gastrointest Endosc. 2000;52(3):346-352. doi: 10.1067/mge.2000.108480.
8. Froehlich F, Wietlisbach V, Gonvers J-J, Burnand B, Vader J-P. Impact of colonic cleansing on quality and diagnostic yield of colonoscopy: the European Panel of Appropriateness of Gastrointestinal Endoscopy European multicenter study. Gastrointest Endosc. 2005;61(3):378-384. doi: 10.1016/s0016-5107(04)02776-2.
9. Sarvepalli S, Garber A, Rizk M, et al. 923 Adjusted comparison of commercial bowel preparations based on inadequacy of bowel preparation in outpatient settings. Gastrointest Endosc. 2018;87(6):AB127. doi: 10.1016/j.gie.2018.04.1331.
10. Hendry PO, Jenkins JT, Diament RH. The impact of poor bowel preparation on colonoscopy: a prospective single center study of 10,571 colonoscopies. Colorectal Dis. 2007;9(8):745-748. doi: 10.1111/j.1463-1318.2007.01220.x.
11. Lebwohl B, Wang TC, Neugut AI. Socioeconomic and other predictors of colonoscopy preparation quality. Dig Dis Sci. 2010;55(7):2014-2020. doi: 10.1007/s10620-009-1079-7.
12. Chorev N, Chadad B, Segal N, et al. Preparation for colonoscopy in hospitalized patients. Dig Dis Sci. 2007;52(3):835-839. doi: 10.1007/s10620-006-9591-5.
13. Weiss AJ. Overview of Hospital Stays in the United States, 2012. HCUP Statistical Brief #180. Rockville, MD: Agency for Healthcare Research and Quality; 2014.
14. Kojecky V, Matous J, Keil R, et al. The optimal bowel preparation intervals before colonoscopy: a randomized study comparing polyethylene glycol and low-volume solutions. Dig Liver Dis. 2018;50(3):271-276. doi: 10.1016/j.dld.2017.10.010.
15. Siddiqui AA, Yang K, Spechler SJ, et al. Duration of the interval between the completion of bowel preparation and the start of colonoscopy predicts bowel-preparation quality. Gastrointest Endosc. 2009;69(3):700-706. doi: 10.1016/j.gie.2008.09.047.
16. Eun CS, Han DS, Hyun YS, et al. The timing of bowel preparation is more important than the timing of colonoscopy in determining the quality of bowel cleansing. Dig Dis Sci. 2010;56(2):539-544. doi: 10.1007/s10620-010-1457-1.
17. Ergen WF, Pasricha T, Hubbard FJ, et al. Providing hospitalized patients with an educational booklet increases the quality of colonoscopy bowel preparation. Clin Gastroenterol Hepatol. 2016;14(6):858-864. doi: 10.1016/j.cgh.2015.11.015.
18. Yadlapati R, Johnston ER, Gluskin AB, et al. An automated inpatient split-dose bowel preparation system improves colonoscopy quality and reduces repeat procedures. J Clin Gastroenterol. 2018;52(8):709-714. doi: 10.1097/mcg.0000000000000849.
19. Birman-Deych E, Waterman AD, Yan Y, Nilasena DS, Radford MJ, Gage BF. The accuracy of ICD-9-CM codes for identifying cardiovascular and stroke risk factors. Med Care. 2005;43(5):480-485. doi: 10.1097/01.mlr.0000160417.39497.a9.
20. Parmar R, Martel M, Rostom A, Barkun AN. Validated scales for colon cleansing: a systematic review. Am J Gastroenterol. 2016;111(2):197-204. doi: 10.1038/ajg.2015.417.
© 2019 Society of Hospital Medicine
Sepsis Presenting in Hospitals versus Emergency Departments: Demographic, Resuscitation, and Outcome Patterns in a Multicenter Retrospective Cohort
Sepsis is both the most expensive condition treated and the most common cause of death in hospitals in the United States.1-3 Most sepsis patients (as many as 80% to 90%) meet sepsis criteria on hospital arrival, but mortality and costs are higher when criteria are met after admission.3-6 The mechanisms of this increased mortality in these distinct populations are not well explored. Patients who present septic in the emergency department (ED) and patients who first present as inpatients likely pose very different challenges for recognition, treatment, and monitoring.7 Yet how these groups differ by demographic and clinical characteristics, the etiology and severity of infection, and patterns of resuscitation care is not well described. Literature on sepsis epidemiology on hospital wards is particularly limited.8
This knowledge gap is important. If hospital-presenting sepsis (HPS) contributes disproportionately to disease burdens, it reflects a high-yield population deserving the focus of quality improvement (QI) initiatives. If specific causes of disparities were identified—eg, poor initial resuscitation—they could be specifically targeted for correction. Given that current treatment guidelines are uniform for the two populations,9,10 characterizing phenotypic differences could also have implications for both diagnostic and therapeutic recommendations, particularly if the groups display substantially differing clinical presentations. Our prior work has not probed these effects specifically but suggested that ED versus inpatient setting at the time of initial sepsis presentation might be an effect modifier for the association between several elements of fluid resuscitation and patient outcomes.11,12
We, therefore, conducted a retrospective analysis to ask four sequential questions: (1) Do patients with HPS, compared with ED-presenting sepsis (EDPS), contribute adverse outcomes out of proportion to case prevalence? (2) At the time of initial presentation, how do HPS patients differ from EDPS patients with respect to demographics, comorbidities, infectious etiologies, clinical presentations, and severity of illness? (3) Holding observed baseline factors constant, does the physical location of sepsis presentation inherently increase the risk for treatment delays and mortality? (4) To what extent can differences in the likelihood of timely initial treatment between the ED and inpatient settings explain differences in mortality and patient outcomes?
We hypothesized a priori that HPS would reflect chronically sicker patients who both received less timely resuscitation and contributed disproportionately frequent adverse outcomes. We expected that disparities in timely resuscitation care would explain a large proportion of this difference.
METHODS
We performed a retrospective analysis of the Northwell Sepsis Database, a prospectively captured, multisite, real-world, consecutive-sample cohort of all “severe sepsis” and septic shock patients treated at nine tertiary and community hospitals in New York from October 1, 2014, to March 31, 2016. We analyzed all patients from a previously published cohort.11
Database Design and Structure
The Northwell Sepsis Database has previously been described in detail.11,13,14 Briefly, all patients met clinical sepsis criteria: (1) infection AND (2) ≥2 systemic inflammatory response syndrome (SIRS) criteria AND (3) ≥1 acute organ dysfunction criterion. Organ dysfunction criteria were hypotension, acute kidney injury (AKI), coagulopathy, altered gas exchange, elevated bilirubin (≥2.0 mg/dL), or altered mental status (AMS; clarified in Supplemental Table 1). Organ dysfunction was counted only if not otherwise explained by the patient’s medical history; eg, patients on warfarin anticoagulation were not classified as coagulopathic based on an international normalized ratio >1.5. The time of the sepsis episode (and database inclusion) was the time of the first vital sign measurement or laboratory result at which a patient simultaneously met all three inclusion criteria: infection, SIRS, and organ dysfunction. The database excludes patients who were <18 years old, declined bundle interventions, had advance directives precluding interventions, or were admitted directly to palliative care or hospice. Abstractors assumed comorbidities were absent if not documented within the medical record and that physiologic abnormalities were absent if not measured by the treatment team. There were no missing data for the variables analyzed. We report the analysis in adherence with the STROBE statement guidelines for observational research.
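The episode-time rule lends itself to a simple sweep over time-ordered observations. A minimal sketch follows, assuming hypothetical record fields (the database itself was assembled by trained abstractors, not by this code):

```python
# Sketch of locating the sepsis episode time: the first charted
# timestamp at which infection, >=2 SIRS criteria, and >=1 organ
# dysfunction criterion are met simultaneously. Record fields are
# hypothetical.
from datetime import datetime
from typing import Optional

def episode_time(records: list[dict]) -> Optional[datetime]:
    """records: vital-sign/lab entries with precomputed criterion flags."""
    for r in sorted(records, key=lambda r: r["time"]):
        if r["infection"] and r["sirs_count"] >= 2 and r["organ_dysfx_count"] >= 1:
            return r["time"]
    return None  # never met all three criteria -> not included

rows = [
    {"time": datetime(2015, 3, 1, 8), "infection": True,
     "sirs_count": 1, "organ_dysfx_count": 0},
    {"time": datetime(2015, 3, 1, 14), "infection": True,
     "sirs_count": 2, "organ_dysfx_count": 1},
]
print(episode_time(rows))  # 2015-03-01 14:00:00
```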
Exposure
The primary exposure was whether patients had EDPS versus HPS. We defined EDPS patients as meeting all objective clinical inclusion criteria while physically in the ED. We defined HPS as first meeting sepsis inclusion criteria outside the ED, regardless of the reason for admission, and regardless of whether patients were admitted through the ED or directly to the hospital. All ED patients were admitted to the hospital.
Outcomes
Process outcomes were full 3-hour bundle compliance, time to antibiotic administration, blood cultures before antibiotics, time to fluid initiation, the volume of administered fluid resuscitation, lactate result time, and whether a repeat lactate was obtained (Supplemental Table 2). Treatment times were times of administration (rather than order times). The primary patient outcome was hospital mortality. Secondary patient outcomes were mechanical ventilation, ICU admission, ICU days, and hospital length of stay (LOS). We discounted HPS patients’ LOS to include only days after meeting the inclusion criteria. Patients already in the ICU before meeting sepsis criteria were excluded from the analysis of the ICU admission outcome.
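The LOS discounting rule can be made concrete with a short sketch; the field names here are hypothetical, not the study’s schema:

```python
# Sketch of the LOS discounting rule: HPS patients' LOS counts only
# days after sepsis criteria were met. Field names are hypothetical.
from datetime import datetime

def sepsis_los_days(admit: datetime, discharge: datetime,
                    criteria_met: datetime, is_hps: bool) -> float:
    """Hospital days attributed to the sepsis episode."""
    start = criteria_met if is_hps else admit
    return (discharge - start).total_seconds() / 86400.0

print(sepsis_los_days(datetime(2015, 3, 1), datetime(2015, 3, 15),
                      datetime(2015, 3, 3), is_hps=True))  # 12.0
```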
Statistical Analysis
We report continuous variables as means (standard deviation) or medians (interquartile range), and categorical variables as frequencies (proportions), as appropriate. Summative statistics with 95% confidence intervals (CI) describe overall group contributions. We used generalized linear models to determine patient factors associated with EDPS versus HPS, entering random effects for individual study sites to control for intercenter variability.
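The site-adjusted modeling can be illustrated with a short sketch. The authors fit generalized linear models with random site effects in SAS; the hypothetical Python sketch below uses the closely related generalized estimating equations (GEE) approach with clustering by site, and all file and column names are assumptions.

```python
# Sketch of a site-adjusted model of HPS vs EDPS membership. The
# study used random site effects; this sketch substitutes GEE with
# clustering on site. Column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("sepsis_cohort.csv")  # assumed data file

model = smf.gee("hps ~ age + chf + renal_failure",
                groups="site", data=df,
                family=sm.families.Binomial()).fit()
print(model.summary())
```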
Next, to generate a propensity-matched cohort, we computed propensity scores from a priori selected variables: age, sex, tertiary versus community hospital, congestive heart failure (CHF), renal failure, COPD, diabetes, liver failure, immunocompromise, primary source of infection, nosocomial source, temperature, initial lactate, presenting hypotension, altered gas exchange, AMS, AKI, and coagulopathy. We then matched subjects 1:1 without optimization or replacement, imposing a caliper width of 0.01; ie, we required matched pairs to have a <1.0% difference in propensity scores. The macro used to match subjects is publicly available.15
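The matching step itself was performed with a published SAS macro.15 The sketch below approximates the same greedy 1:1, no-replacement, caliper-0.01 procedure in Python; it is an assumption about the macro’s behavior, not a translation of it.

```python
# Greedy 1:1 propensity matching without replacement, caliper 0.01
# (approximates the published SAS macro; not the authors' code).
import numpy as np

def match_pairs(ps_treated, ps_control, caliper=0.01):
    """Return (treated_idx, control_idx) pairs within the caliper."""
    available = list(range(len(ps_control)))
    pairs = []
    for i, p in enumerate(ps_treated):
        if not available:
            break
        # Nearest still-available control by propensity score.
        j = min(available, key=lambda k: abs(ps_control[k] - p))
        if abs(ps_control[j] - p) < caliper:
            pairs.append((i, j))
            available.remove(j)  # no replacement
    return pairs

rng = np.random.default_rng(42)
ps_hps = rng.uniform(0.1, 0.6, 50)      # toy HPS propensity scores
ps_edps = rng.uniform(0.05, 0.65, 400)  # toy EDPS propensity scores
print(len(match_pairs(ps_hps, ps_edps)), "matched pairs")
```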
We then compared resuscitation and patient outcomes in the matched cohort using generalized linear models, ie, doubly robust estimation (DRE).16 When assessing patient outcomes corrected for resuscitation, we used mixed DRE/multivariable regression. We did this for two reasons: first, DRE has the advantage of requiring only one approach (propensity or covariate adjustment) to be correctly specified.16 Second, computing propensity scores adjusted for resuscitation would be inappropriate given that resuscitation occurs after the exposure allocation (HPS vs EDPS). However, these factors could still affect the outcome, and in fact we hypothesized that they were potential mediators of the exposure effect. To interrogate this mediating relationship, we recapitulated the DRE modeling but added covariates for resuscitation factors. Resuscitation-adjusted models controlled for timeliness of antibiotics, fluids, and lactate results; blood cultures before antibiotics; whether a repeat lactate was obtained; and fluid volume in the first six hours. Because ICU days and LOS are subject to competing-risks bias (LOS could be shorter if patients died earlier), we used proportional hazards models in which “the event” was defined as a live discharge to censor for mortality, and we report output as inverse hazard ratios. We also tested interaction coefficients for discrete bundle elements and HPS to determine whether specific bundle elements were effect modifiers of the association between presenting location and mortality risk. Finally, we estimated attributable risk differences by comparing adjusted odds ratios of adverse outcomes with and without adjustment for resuscitation variables, as described by Sahai et al.17
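For the attributable-risk comparison, one common formulation expresses the proportion of excess odds explained once mediators are added to the model; we assume the cited approach is equivalent to the sketch below, and the inputs shown are illustrative only, not study estimates.

```python
# Proportion of the excess odds of an adverse outcome explained by
# resuscitation mediators, from ORs with and without mediator
# adjustment. Assumed to mirror the cited attributable-risk method;
# inputs are illustrative, not study estimates.
def excess_risk_explained(or_base: float, or_adj: float) -> float:
    """Requires or_base > 1; returns fraction of excess odds explained."""
    return (or_base - or_adj) / (or_base - 1.0)

print(f"{excess_risk_explained(1.50, 1.38):.1%}")  # 24.0% with toy inputs
```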
As sensitivity analyses, we recomputed propensity scores and generated a new matched cohort that excluded HPS patients who met criteria for sepsis while already in the ICU for another reason (ie, excluding ICU-presenting sepsis). We then recapitulated all analyses as above for this cohort. We performed analyses using SAS version 9.4 (SAS Institute, Cary, North Carolina).
RESULTS
Prevalence and Outcome Contributions
Of the 11,182 sepsis patients in the database, we classified 2,509 (22%) as HPS (Figure 1). HPS contributed 785 (35%) of 2,241 sepsis-related mortalities, 1,241 (38%) mechanical ventilations, and 1,762 (34%) ICU admissions. Of 39,263 total ICU days and 127,178 hospital days, HPS contributed 18,104 (46.1%) and 44,412 (34.9%) days, respectively.
Patient Characteristics
Most HPS presented early in the hospital course, with 1,352 (53.9%) cases meeting study criteria within three days of admission. Median time from admission to meeting study criteria for HPS was two days (interquartile range: one to seven days). We report selected baseline patient characteristics in Table 1 and adjusted associations of baseline variables with HPS versus EDPS in Table 2. The full cohort characterization is available in Supplemental Table 3. Notably, HPS patients more often had CHF (aOR [adjusted odds ratio]: 1.31, CI: 1.18-1.47) or renal failure (aOR: 1.62, CI: 1.38-1.91), a gastrointestinal source of infection (aOR: 1.84, CI: 1.48-2.29), hypothermia (aOR: 1.56, CI: 1.28-1.90), hypotension (aOR: 1.85, CI: 1.65-2.08), or altered gas exchange (aOR: 2.46, CI: 1.43-4.24). In contrast, HPS patients were less frequently admitted from skilled nursing facilities (aOR: 0.44, CI: 0.32-0.60), and less often had COPD (aOR: 0.53, CI: 0.36-0.76), fever (aOR: 0.70, CI: 0.52-0.91), tachypnea (aOR: 0.76, CI: 0.58-0.98), or AKI (aOR: 0.82, CI: 0.68-0.97). Other baseline variables were similar, including respiratory source, tachycardia, white cell abnormalities, AMS, and coagulopathies. These associations were preserved in the sensitivity analysis excluding ICU-presenting sepsis.
Propensity Matching
Propensity score matching yielded 1,942 matched pairs (n = 3,884; 77% of HPS patients, 22% of EDPS patients). Table 1 and Supplemental Table 3 show patient characteristics after propensity matching. Supplemental Table 4 shows the propensity model. Frequency densities for the cohort as a function of propensity score are shown in Supplemental Figure 1. After matching, frequencies between groups differed by <5% for all categorical variables assessed. In the sensitivity analysis, propensity matching (model in Supplemental Table 5) resulted in 1,233 matched pairs (n = 2,466; 49% of HPS patients, 14% of EDPS patients), with group differences comparable to the primary analysis.
Process Outcomes
We present propensity-matched differences in initial resuscitation in Figure 2A for all HPS patients, as well as for non-ICU-presenting HPS, versus EDPS. HPS patients were roughly half as likely to receive fully 3-hour-bundle-compliant care (17.0% vs 30.3%, aOR: 0.47, CI: 0.40-0.57), to have blood cultures drawn within three hours and prior to antibiotics (44.9% vs 67.2%, aOR: 0.40, CI: 0.35-0.46), or to have fluid resuscitation initiated within two hours (11.1% vs 26.1%, aOR: 0.35, CI: 0.29-0.42). Antibiotic receipt within one hour was comparable (45.3% vs 48.1%, aOR: 0.89, CI: 0.79-1.01). However, differences emerged for antibiotics within three hours (66.2% vs 83.8%, aOR: 0.38, CI: 0.32-0.44) and persisted at six hours (77.0% vs 92.5%, aOR: 0.27, CI: 0.22-0.33). Excluding ICU-presenting sepsis from propensity matching widened disparities in antibiotic receipt at one hour (43.4% vs 49.1%, aOR: 0.80, CI: 0.68-0.93), three hours (64.2% vs 86.1%, aOR: 0.29, CI: 0.24-0.35), and six hours (75.7% vs 93.0%, aOR: 0.23, CI: 0.18-0.30). HPS patients more frequently had a repeat lactate obtained within 24 hours (62.4% vs 54.3%, aOR: 1.40, CI: 1.23-1.59).
Patient Outcomes
HPS patients had higher mortality (31.2% vs 19.3%), mechanical ventilation (51.5% vs 27.4%), and ICU admission (60.6% vs 46.5%) (Table 1 and Supplemental Table 6). Figure 2B shows propensity-matched and covariate-adjusted differences in patient outcomes before and after adjusting for initial resuscitation. aORs corresponded to approximate relative risks18 of 1.38 (CI: 1.28-1.48), 1.68 (CI: 1.57-1.79), and 1.72 (CI: 1.61-1.84) for mortality, mechanical ventilation, and ICU admission, respectively. HPS was associated with 83% longer mortality-censored ICU stays (nine vs five days, inverse HR: 1.83, CI: 1.65-2.03) and 108% longer hospital stays (17 vs eight days, inverse HR: 2.08, CI: 1.93-2.24). After adjustment for resuscitation, all effect sizes decreased but persisted. Initial crystalloid volume was a significant negative effect modifier for mortality (Supplemental Table 7); that is, the magnitude of the association between HPS and greater mortality decreased by a factor of 0.89 per 10 mL/kg given (CI: 0.82-0.97). We did not observe significant interactions from other interventions or from overall bundle compliance, meaning these interventions' association with mortality did not significantly differ between HPS and EDPS.
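The square-root transformation cited above18 holds that, for a common outcome, the relative risk is approximately the square root of the odds ratio. A quick arithmetic check, with the caveat that the aOR inputs below are back-calculated from the reported approximate relative risks rather than taken from the study tables:

```python
# Hedged arithmetic check of the square-root OR transformation
# (VanderWeele, ref 18): for common outcomes, RR ~= sqrt(OR).
# The aOR values are back-calculated from the reported approximate
# relative risks (1.38, 1.68, 1.72), not drawn from study data.
import math

assumed_aors = {"mortality": 1.90, "mechanical ventilation": 2.82,
                "ICU admission": 2.96}
for outcome, aor in assumed_aors.items():
    print(f"{outcome}: aOR {aor:.2f} -> approx RR {math.sqrt(aor):.2f}")
# mortality: aOR 1.90 -> approx RR 1.38
# mechanical ventilation: aOR 2.82 -> approx RR 1.68
# ICU admission: aOR 2.96 -> approx RR 1.72
```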
The implied attributable risk difference from discrepancies in initial resuscitation was 23.3% for mortality, 35.2% for mechanical ventilation, and 7.6% for ICU admission (Figure 2B). Resuscitation explained 26.5% of longer ICU LOS and 16.7% of longer hospital LOS associated with HPS.
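As described in the Methods, these percentages compare adjusted odds ratios with and without adjustment for resuscitation. A minimal sketch of that calculation, assuming the standard proportion-explained formula; the OR pair below is a hypothetical placeholder chosen only to reproduce the 23.3% mortality figure, not a study estimate:

```python
# Minimal sketch, assuming the proportion-explained formula
# (OR_base - OR_adjusted) / (OR_base - 1), in the spirit of the
# attributable-risk methods of Sahai and Khurshid (ref 17).
# The OR inputs are hypothetical placeholders, not study estimates.
def proportion_explained(or_base: float, or_adjusted: float) -> float:
    return (or_base - or_adjusted) / (or_base - 1.0)

print(f"{proportion_explained(1.90, 1.69):.1%}")  # -> 23.3%
```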
Figure 2C shows sensitivity analysis excluding ICU-presenting sepsis from propensity matching (ie, limiting HPS to hospital ward presentations). Again, HPS was associated with all adverse outcomes, though effect sizes were smaller than in the primary cohort for all outcomes except hospital LOS. In this cohort, resuscitation factors now explained 16.5% of HPS’ association with mortality, and 14.5% of the association with longer ICU LOS. However, they explained a greater proportion (13.0%) of ICU admissions. Attributable risk differences were comparable to the primary cohort for mechanical ventilation (37.6%) and hospital LOS (15.3%).
DISCUSSION
In this analysis of 11,182 sepsis and septic shock patients, HPS contributed 22% of prevalence but >35% of total sepsis mortalities, ICU utilization, and hospital days. HPS patients had higher comorbidity burdens and clinical presentations that were less obviously attributable to infection, with more severe organ dysfunction. EDPS patients received antibiotics within three hours about 1.62 times more often than HPS patients, and received fluids initiated within two hours about 1.82 times more often. HPS had nearly 1.5-fold greater mortality and LOS, and nearly two-fold greater mechanical ventilation and ICU utilization. Resuscitation disparities could partially explain these associations. These patterns persisted when comparing only ward-presenting HPS with EDPS.
Our analysis revealed several notable findings. First, these data confirm that HPS represents a potentially high-impact target population, contributing adverse outcomes out of proportion to its case prevalence.
Our findings, unsurprisingly, revealed that HPS and EDPS reflect dramatically different patient populations: the two groups differed significantly on most of the baseline factors we compared. It may be worth asking if and how these substantial differences in illness etiology, chronic health, and acute physiology affect what we consider an optimal approach to management. The significant interaction effect of fluid volume on the association between HPS and mortality suggests differential treatment effects may exist between patients. Indeed, patients newly arrived from the community and those several days into an admission likely have different volume status. However, no interactions were noted with other bundle elements, such as timeliness of antibiotics or timeliness of initial fluids.
Another potentially concerning observation was that HPS patients were admitted from skilled nursing facilities much less frequently, which could imply that this poorer-faring population had a comparatively higher baseline functional status. The fact that 25% of EDPS cases were admitted from these facilities also underscores the need to engage skilled nursing facility providers in future sepsis initiatives.
We found marked disparities in resuscitation. Timely delivery of interventions, such as antibiotics and initial fluid resuscitation, occurred less than half as often for HPS, especially on hospital wards. While evidence supporting the efficacy of specific 3-hour bundle elements remains unsettled,19 a wealth of literature demonstrates a correlation between bundle uptake and decreased sepsis mortality, especially for early antibiotic administration.13,20-26 Some analysis suggests that differing initial resuscitation practices explain different mortality rates in the early goal-directed therapy trials.27 The comparatively poor performance for non-ICU HPS indicates further QI efforts are better focused on inpatient wards, rather than on EDs or ICUs where resuscitation is already delivered with substantially greater fidelity.
While resuscitation differences partially explained outcome discrepancies between groups, they did not account for as much variation as expected. Though resuscitation accounted for >35% of attributable mechanical ventilation risk, it explained only 16.5% of mortality differences for non-ICU HPS vs EDPS. We speculate that several factors may contribute.
First, HPS patients are already hospitalized for another acute insult and may be too physiologically brittle to derive equal benefit from initial resuscitation. Some literature suggests protocolized sepsis resuscitation may paradoxically be more effective in milder/earlier disease.28
Second, clinical information indicating septic organ dysfunction may become available too late in HPS, a possible data limitation in which inpatient providers are, counterintuitively, more likely to miss early signs of deterioration and the accompanying therapeutic window. Several studies found that fluid resuscitation is associated with improved sepsis outcomes only when it is administered very early.11,29-31 On inpatient wards, decreased monitoring32 and human factors (eg, hospital workflow, provider-to-patient ratios, electronic documentation burdens)33,34 may hinder early diagnosis. In contrast, ED environments are explicitly designed to identify acutely ill patients and deliver intervention rapidly. If HPS patients were sicker when they were identified, this would also explain their more severe organ dysfunction. Our data seem to support this possibility: HPS patients had tachypnea less frequently but more often had impaired gas exchange, which may suggest that early tachypnea was either less often detected or documented, or that disease had progressed further by the time of detection.
Third, inpatients with sepsis may more often present with greater diagnostic complexity. We observed that HPS patients were more often euthermic and less often tachypneic. Beyond suggesting a greater diagnostic challenge, this also raises the question of whether these differences reflect patient physiology (response to infection) or iatrogenic factors (eg, prior antipyretics). Higher comorbidity and acute physiological burdens also limit the degree to which new organ dysfunction can be clearly attributed to infection. We note that the difference in the proportion of patients who received antibiotics widened over time, suggesting that HPS patients who received delayed antibiotics did so much later than their EDPS counterparts. This lag could also arise from diagnostic difficulty.
All three possibilities highlight a potential lead-time effect: the same measured three-hour period on the wards, between meeting sepsis criteria and starting treatment, may actually reflect a longer period between an (as yet unmeasurable) pathobiologic "time zero" and treatment than it does in the ED. The time of sepsis detection, as distinct from the time of sepsis onset, therefore proves difficult to evaluate and impossible to account for statistically.
Regardless, our findings suggest additional difficulty in both the recognition and the resuscitation of inpatient sepsis. Inpatients, especially those with infections, may need closer monitoring. How to implement this monitoring cost-effectively is a challenge that deserves attention.
A more rational systems approach to HPS likely combines efforts to improve initial resuscitation with other initiatives aimed at both improving monitoring and preventing infection.
To be clear, we do not imply that timely initial resuscitation does not matter on the wards. Rather, resuscitation-focused QI alone does not appear to be sufficient to overcome differences in outcomes for HPS. The 23.3% attributable mortality risk we observed still implies that resuscitation differences could explain nearly one in four excess HPS mortalities. We previously showed that timely resuscitation is strongly associated with better outcomes.11,13,30 As discussed above, the unclear degree to which better resuscitation is a marker for more obvious presentations is a persistent limitation of prior investigations and the present study.
Taken together, the ultimate question that this study raises but cannot answer is whether the timely recognition of sepsis, rather than any specific treatment, is what truly improves outcomes.
In addition to those above, this study has several limitations. Our study did not differentiate between HPS patients admitted for noninfectious reasons who subsequently became septic and nonseptic patients admitted for an infection who subsequently became septic from that infection. Nor could we discriminate between missed ED diagnoses and true delayed presentations; we note that distinguishing these entities clinically can be equally challenging. Additionally, this was a propensity-matched retrospective analysis of an existing sepsis cohort, and the many limitations of both retrospective studies and propensity matching apply.35,36 We note that randomizing patients to develop sepsis in the community versus the hospital is not feasible and that two of our aims intended to describe overall patterns rather than causal effects. We could not ascertain robust measures of severity of illness (eg, SOFA) because the real-world setting precludes required data points; for example, urine output is unreliably recorded. We also note incomplete overlap between our inclusion criteria and either the Sepsis-2 or Sepsis-3 definitions,1,37 because we designed and populated our database prior to publication of Sepsis-3. Further, we could not account for surgical source control, the appropriateness of antimicrobial therapy, mechanical ventilation before sepsis onset, or most treatments given after initial resuscitation.
In conclusion, hospital-presenting sepsis accounted for adverse patient outcomes disproportionately to prevalence. HPS patients had more complex presentations, received timely antibiotics half as often as ED-presenting sepsis patients, and had nearly twice the mortality odds. Resuscitation disparities explained roughly 25% of this difference.
Disclosures
The authors have no conflicts of interest to disclose.
Funding
This investigation was funded in part by a grant from the Center for Medicare and Medicaid Innovation to the High Value Healthcare Collaborative, of which the study sites’ umbrella health system was a part. This grant helped fund the underlying QI program and database in this study.
1. Singer M, Deutschman CS, Seymour CW, et al. The third international consensus definitions for sepsis and septic shock (Sepsis-3). JAMA. 2016;315(8):801-810. doi: 10.1001/jama.2016.0287. PubMed
2. Torio CM, Andrews RM. National inpatient hospital costs: the most expensive conditions by payer, 2011. Statistical Brief No. 160. Rockville, MD: Agency for Healthcare Research and Quality; 2013. PubMed
3. Liu V, Escobar GJ, Greene JD, et al. Hospital deaths in patients with sepsis from 2 independent cohorts. JAMA. 2014;312(1):90-92. doi: 10.1001/jama.2014.5804. PubMed
4. Seymour CW, Liu VX, Iwashyna TJ, et al. Assessment of clinical criteria for sepsis: for the third international consensus definitions for sepsis and septic shock (Sepsis-3). JAMA. 2016;315(8):762-774. doi: 10.1001/jama.2016.0288. PubMed
5. Jones SL, Ashton CM, Kiehne LB, et al. Outcomes and resource use of sepsis-associated stays by presence on admission, severity, and hospital type. Med Care. 2016;54(3):303-310. doi: 10.1097/MLR.0000000000000481. PubMed
6. Page DB, Donnelly JP, Wang HE. Community-, healthcare-, and hospital-acquired severe sepsis hospitalizations in the university healthsystem consortium. Crit Care Med. 2015;43(9):1945-1951. doi: 10.1097/CCM.0000000000001164. PubMed
7. Rothman M, Levy M, Dellinger RP, et al. Sepsis as 2 problems: identifying sepsis at admission and predicting onset in the hospital using an electronic medical record-based acuity score. J Crit Care. 2016;38:237-244. doi: 10.1016/j.jcrc.2016.11.037. PubMed
8. Chan P, Peake S, Bellomo R, Jones D. Improving the recognition of, and response to in-hospital sepsis. Curr Infect Dis Rep. 2016;18(7):20. doi: 10.1007/s11908-016-0528-7. PubMed
9. Rhodes A, Evans LE, Alhazzani W, et al. Surviving sepsis campaign: international guidelines for management of sepsis and septic shock: 2016. Crit Care Med. 2017;45(3):486-552. doi: 10.1097/CCM.0000000000002255. PubMed
10. Levy MM, Evans LE, Rhodes A. The Surviving Sepsis Campaign Bundle: 2018 Update. Crit Care Med. 2018;46(6):997-1000. doi: 10.1097/CCM.0000000000003119. PubMed
11. Leisman DE, Goldman C, Doerfler ME, et al. Patterns and outcomes associated with timeliness of initial crystalloid resuscitation in a prospective sepsis and septic shock cohort. Crit Care Med. 2017;45(10):1596-1606. doi: 10.1097/CCM.0000000000002574. PubMed
12. Leisman DE, Doerfler ME, Schneider SM, Masick KD, D’Amore JA, D’Angelo JK. Predictors, prevalence, and outcomes of early crystalloid responsiveness among initially hypotensive patients with sepsis and septic shock. Crit Care Med. 2018;46(2):189-198. doi: 10.1097/CCM.0000000000002834. PubMed
13. Leisman DE, Doerfler ME, Ward MF, et al. Survival benefit and cost savings from compliance with a simplified 3-hour sepsis bundle in a series of prospective, multisite, observational cohorts. Crit Care Med. 2017;45(3):395-406. doi: 10.1097/CCM.0000000000002184. PubMed
14. Doerfler ME, D’Angelo J, Jacobsen D, et al. Methods for reducing sepsis mortality in emergency departments and inpatient units. Jt Comm J Qual Patient Saf. 2015;41(5):205-211. doi: 10.1016/S1553-7250(15)41027-X. PubMed
15. Murphy B, Fraeman KH. A general SAS® macro to implement optimal N:1 propensity score matching within a maximum radius. In: Paper 812-2017. Waltham, MA: Evidera; 2017. https://support.sas.com/resources/papers/proceedings17/0812-2017.pdf. Accessed February 20, 2019.
16. Funk MJ, Westreich D, Wiesen C, Stürmer T, Brookhart MA, Davidian M. Doubly robust estimation of causal effects. Am J Epidemiol. 2011;173(7):761-767. doi: 10.1093/aje/kwq439. PubMed
17. Sahai H, Khurshid A. Statistics in Epidemiology: Methods, Techniques, and Applications. Boca Raton, FL: CRC Press; 1995.
18. VanderWeele TJ. On a square-root transformation of the odds ratio for a common outcome. Epidemiology. 2017;28(6):e58-e60. doi: 10.1097/EDE.0000000000000733. PubMed
19. Pepper DJ, Natanson C, Eichacker PQ. Evidence underpinning the centers for medicare & medicaid services’ severe sepsis and septic shock management bundle (SEP-1). Ann Intern Med. 2018;168(8):610-612. doi: 10.7326/L18-0140. PubMed
20. Levy MM, Rhodes A, Phillips GS, et al. Surviving sepsis campaign: association between performance metrics and outcomes in a 7.5-year study. Crit Care Med. 2015;43(1):3-12. doi: 10.1097/CCM.0000000000000723. PubMed
21. Liu VX, Morehouse JW, Marelich GP, et al. Multicenter implementation of a treatment bundle for patients with sepsis and intermediate lactate values. Am J Respir Crit Care Med. 2016;193(11):1264-1270. doi: 10.1164/rccm.201507-1489OC. PubMed
22. Miller RR, Dong L, Nelson NC, et al. Multicenter implementation of a severe sepsis and septic shock treatment bundle. Am J Respir Crit Care Med. 2013;188(1):77-82. doi: 10.1164/rccm.201212-2199OC. PubMed
23. Seymour CW, Gesten F, Prescott HC, et al. Time to treatment and mortality during mandated emergency care for sepsis. N Engl J Med. 2017;376(23):2235-2244. doi: 10.1056/NEJMoa1703058. PubMed
24. Pruinelli L, Westra BL, Yadav P, et al. Delay within the 3-hour surviving sepsis campaign guideline on mortality for patients with severe sepsis and septic shock. Crit Care Med. 2018;46(4):500-505. doi: 10.1097/CCM.0000000000002949. PubMed
25. Kumar A, Roberts D, Wood KE, et al. Duration of hypotension before initiation of effective antimicrobial therapy is the critical determinant of survival in human septic shock. Crit Care Med. 2006;34(6):1589-1596. doi: 10.1097/01.CCM.0000217961.75225.E9. PubMed
26. Liu VX, Fielding-Singh V, Greene JD, et al. The timing of early antibiotics and hospital mortality in sepsis. Am J Respir Crit Care Med. 2017;196(7):856-863. doi: 10.1164/rccm.201609-1848OC. PubMed
27. Kalil AC, Johnson DW, Lisco SJ, Sun J. Early goal-directed therapy for sepsis: a novel solution for discordant survival outcomes in clinical trials. Crit Care Med. 2017;45(4):607-614. doi: 10.1097/CCM.0000000000002235. PubMed
28. Kellum JA, Pike F, Yealy DM, et al. Relationship between alternative resuscitation strategies, host response and injury biomarkers, and outcome in septic shock: analysis of the protocol-based care for early septic shock study. Crit Care Med. 2017;45(3):438-445. doi: 10.1097/CCM.0000000000002206. PubMed
29. Seymour CW, Cooke CR, Heckbert SR, et al. Prehospital intravenous access and fluid resuscitation in severe sepsis: an observational cohort study. Crit Care. 2014;18(5):533. doi: 10.1186/s13054-014-0533-x. PubMed
30. Leisman D, Wie B, Doerfler M, et al. Association of fluid resuscitation initiation within 30 minutes of severe sepsis and septic shock recognition with reduced mortality and length of stay. Ann Emerg Med. 2016;68(3):298-311. doi: 10.1016/j.annemergmed.2016.02.044. PubMed
31. Lee SJ, Ramar K, Park JG, Gajic O, Li G, Kashyap R. Increased fluid administration in the first three hours of sepsis resuscitation is associated with reduced mortality: a retrospective cohort study. Chest. 2014;146(4):908-915. doi: 10.1378/chest.13-2702. PubMed
32. Smyth MA, Daniels R, Perkins GD. Identification of sepsis among ward patients. Am J Respir Crit Care Med. 2015;192(8):910-911. doi: 10.1164/rccm.201507-1395ED. PubMed
33. Wenger N, Méan M, Castioni J, Marques-Vidal P, Waeber G, Garnier A. Allocation of internal medicine resident time in a Swiss hospital: a time and motion study of day and evening shifts. Ann Intern Med. 2017;166(8):579-586. doi: 10.7326/M16-2238. PubMed
34. Mamykina L, Vawdrey DK, Hripcsak G. How do residents spend their shift time? A time and motion study with a particular focus on the use of computers. Acad Med. 2016;91(6):827-832. doi: 10.1097/ACM.0000000000001148. PubMed
35. Kaji AH, Schriger D, Green S. Looking through the retrospectoscope: reducing bias in emergency medicine chart review studies. Ann Emerg Med. 2014;64(3):292-298. doi: 10.1016/j.annemergmed.2014.03.025. PubMed
36. Leisman DE. Ten pearls and pitfalls of propensity scores in critical care research: a guide for clinicians and researchers. Crit Care Med. 2019;47(2):176-185. doi: 10.1097/CCM.0000000000003567. PubMed
37. Levy MM, Fink MP, Marshall JC, et al. 2001 SCCM/ESICM/ACCP/ATS/SIS international sepsis definitions conference. Crit Care Med. 2003;31(4):1250-1256. doi: 10.1097/01.CCM.0000050454.01978.3B. PubMed
Sepsis is both the most expensive condition treated and the most common cause of death in hospitals in the United States.1-3 Most sepsis patients (as many as 80% to 90%) meet sepsis criteria on hospital arrival, but mortality and costs are higher when meeting criteria after admission.3-6 Mechanisms of this increased mortality for these distinct populations are not well explored. Patients who present septic in the emergency department (ED) and patients who present as inpatients likely present very different challenges for recognition, treatment, and monitoring.7 Yet, how these groups differ by demographic and clinical characteristics, the etiology and severity of infection, and patterns of resuscitation care are not well described. Literature on sepsis epidemiology on hospital wards is particularly limited.8
This knowledge gap is important. If hospital-presenting sepsis (HPS) contributes disproportionately to disease burdCHFens, it reflects a high-yield population deserving the focus of quality improvement (QI) initiatives. If specific causes of disparities were identified—eg, poor initial resuscitation— they could be specifically targeted for correction. Given that current treatment guidelines are uniform for the two populations,9,10 characterizing phenotypic differences could also have implications for both diagnostic and therapeutic recommendations, particularly if the groups display substantially differing clinical presentations. Our prior work has not probed these effects specifically, but suggested ED versus inpatient setting at the time of initial sepsis presentation might be an effect modifier for the association between several elements of fluid resuscitation and patient outcomes.11,12
We, therefore, conducted a retrospective analysis to ask four sequential questions: (1) Do patients with HPS, compared with EDPS, contribute adverse outcome out of proportion to case prevalence? (2) At the time of initial presentation, how do HPS patients differ from EDPS patients with respect to demographics, comorbidities, infectious etiologies, clinical presentations, and severity of illness (3) If holding observed baseline factors constant, does the physical location of sepsis presentation inherently increase the risk for treatment delays and mortality? (4) To what extent can differences in the likelihood for timely initial treatment between the ED and inpatient settings explain differences in mortality and patient outcomes?
We hypothesized a priori that HPS would reflect chronically sicker patients whom both received less timely resuscitation and who contributed disproportionately frequent bad outcomes. We expected disparities in timely resuscitation care would explain a large proportion of this difference.
METHODS
We performed a retrospective analysis of the Northwell Sepsis Database, a prospectively captured, multisite, real world, consecutive-sample cohort of all “severe sepsis” and septic shock patients treated at nine tertiary and community hospitals in New York from October 1, 2014, to March 31, 2016. We analyzed all patients from a previously published cohort.11
Database Design and Structure
The Northwell Sepsis Database has previously been described in detail.11,13,14 Briefly, all patients met clinical sepsis criteria: (1) infection AND (2) ≥2 (SIRS) criteria AND (3) ≥1 acute organ dysfunction criterion. Organ dysfunction criteria were hypotension, acute kidney injury (AKI), coagulopathy, altered gas exchange, elevated bilirubin (≥2.0 mg/dL), or altered mental status (AMS; clarified in Supplemental Table 1). All organ dysfunction was not otherwise explained by patients’ medical histories; eg, patients on warfarin anticoagulation were not documented to have coagulopathy based on international normalized ratio > 1.5. The time of the sepsis episode (and database inclusion) was the time of the first vital sign measurement or laboratory result where a patient simultaneously met all three inclusion criteria: infection, SIRS, and organ dysfunction. The database excludes patients who were <18 years, declined bundle interventions, had advance directives precluding interventions, or were admitted directly to palliative care or hospice. Abstractors assumed comorbidities were absent if not documented within the medical record and that physiologic abnormalities were absent if not measured by the treatment team. There were no missing data for the variables analyzed. We report analysis in adherence with the STROBE statement guidelines for observational research.
Exposure
The primary exposure was whether patients had EDPS versus HPS. We defined EDPS patients as meeting all objective clinical inclusion criteria while physically in the ED. We defined HPS as first meeting sepsis inclusion criteria outside the ED, regardless of the reason for admission, and regardless of whether patients were admitted through the ED or directly to the hospital. All ED patients were admitted to the hospital.
Outcomes
Process outcomes were full 3-hour bundle compliance, time to antibiotic administration, blood cultures before antibiotics, time to fluid initiation, the volume of administered fluid resuscitation, lactate result time, and whether repeat lactate was obtained (Supplemental Table 2). Treatment times were times of administration (rather than order time). The primary patient outcome was hospital mortality. Secondary patient outcomes were mechanical ventilation, ICU admission, ICU days, hospital length of stay (LOS). We discounted HPS patients’ LOS to include only days after meeting the inclusion criteria. Patients were excluded from the analysis of the ICU admission outcome if they were already in the ICU prior to meeting sepsis criteria.
Statistical Analysis
We report continuous variables as means (standard deviation) or medians (interquartile range), and categorical variables as frequencies (proportions), as appropriate. Summative statistics with 95% confidence intervals (CI) describe overall group contributions. We used generalized linear models to determine patient factors associated with EDPS versus HPS, entering random effects for individual study sites to control for intercenter variability.
Next, to generate a propensity-matched cohort, we computed propensity scores adjusted from a priori selected variables: age, sex, tertiary versus community hospital, congestive heart failure (CHF), renal failure, COPD, diabetes, liver failure, immunocompromise, primary source of infection, nosocomial source, temperature, initial lactate, presenting hypotension, altered gas exchange, AMS, AKI, and coagulopathy. We then matched subjects 1:1 without optimization or replacement, imposing a caliper width of 0.01; ie, we required matched pairs to have a <1.0% difference in propensity scores. The macro used to match subjects is publically available.15
We then compared resuscitation and patient outcomes in the matched cohort using generalized linear models, ie, doubly-robust estimation (DRE).16 When assessing patient outcomes corrected for resuscitation, we used mixed DRE/multivariable regression. We did this for two reasons: first, DRE has the advantage of only requiring only one approach (propensity vs covariate adjustments) to be correctly specified.16 Second, computing propensity scores adjusted for resuscitation would be inappropriate given that resuscitation occurs after the exposure allocation (HPS vs EDPS). However, these factors could still impact the outcome and in fact, we hypothesized they were potential mediators of the exposure effect. To interrogate this mediating relationship, we recapitulated the DRE modeling but added covariates for resuscitation factors. Resuscitation-adjusted models controlled for timeliness of antibiotics, fluids, and lactate results; blood cultures before antibiotics; repeat lactate obtained, and fluid volume in the first six hours. Since ICU days and LOS are subject to competing risks bias (LOS could be shorter if patients died earlier), we used proportional hazards models where “the event” was defined as a live discharge to censor for mortality and we report output as inverse hazard ratios. We also tested interaction coefficients for discrete bundle elements and HPS to determine if specific bundle elements were effect modifiers for the association between the presenting location and mortality risk. Finally, we estimated attributable risk differences by comparing adjusted odds ratios of adverse outcome with and without adjustment for resuscitation variables, as described by Sahai et al.17
As sensitivity analyses, we recomputed propensity scores and generated a new matched cohort that excluded HPS patients who met criteria for sepsis while already in the ICU for another reason (ie, excluding ICU-presenting sepsis). We then recapitulated all analyses as above for this cohort. We performed analyses using SAS version 9.4 (SAS Institute, Cary, North Carolina).
RESULTS
Prevalence and Outcome Contributions
Of the 11,182 sepsis patients in the database, we classified 2,509 (22%) as HPS (Figure 1). HPS contributed 785 (35%) of 2,241 sepsis-related mortalities, 1,241 (38%) mechanical ventilations, and 1,762 (34%) ICU admissions. Of 39,263 total ICU days and 127,178 hospital days, HPS contributed 18,104 (46.1%) and 44,412 (34.9%) days, respectively.
Patient Characteristics
Most HPS presented early in the hospital course, with 1,352 (53.9%) cases meeting study criteria within three days of admission. Median time from admission to meeting study criteria for HPS was two days (interquartile range: one to seven days). We report selected baseline patient characteristics in Table 1 and adjusted associations of baseline variables with HPS versus EDPS in Table 2. The full cohort characterization is available in Supplemental Table 3. Notably, HPS patients more often had CHF (aOR [adjusted odds ratio}: 1.31, CI: 1.18-1.47) or renal failure (aOR: 1.62, CI: 1.38-1.91), gastrointestinal source of infection (aOR: 1.84, CI: 1.48-2.29), hypothermia (aOR: 1.56, CI: 1.28-1.90) hypotension (aOR: 1.85, CI: 1.65-2.08), or altered gas exchange (aOR: 2.46, CI: 1.43-4.24). In contrast, HPS patients less frequently were admitted from skilled nursing facilities (aOR: 0.44, CI: 0.32-0.60), or had COPD (aOR: 0.53, CI: 0.36-0.76), fever (aOR: 0.70, CI: 0.52-0.91), tachypnea (aOR: 0.76, CI: 0.58-0.98), or AKI (aOR: 082, CI: 0.68-0.97). Other baseline variables were similar, including respiratory source, tachycardia, white cell abnormalities, AMS, and coagulopathies. These associations were preserved in the sensitivity analysis excluding ICU-presenting sepsis.
Propensity Matching
Propensity score matching yielded 1,942 matched pairs (n = 3,884, 77% of HPS patients, 22% of EDPS patients). Table 1 and Supplemental Table 3 show patient characteristics after propensity matching. Supplemental Table 4 shows the propensity model. The frequency densities are shown for the cohort as a function of propensity score in Supplemental Figure 1. After matching, frequencies between groups differed by <5% for all categorical variables assessed. In the sensitivity analysis, propensity matching (model in Supplemental Table 5) resulted in 1,233 matched pairs (n = 2,466, 49% of HPS patients, 14% of EDPS patients), with group differences comparable to the primary analysis.
Process Outcomes
We present propensity-matched differences in initial resuscitation in Figure 2A for all HPS patients, as well as non-ICU-presenting HPS, versus EDPS. HPS patients were roughly half as likely to receive fully 3-hour bundle compliant care (17.0% vs 30.3%, aOR: 0.47, CI: 0.40-0.57), to have blood cultures drawn within three hours prior to antibiotics (44.9% vs 67.2%, aOR: 0.40, CI: 0.35-0.46), or to receive fluid resuscitation initiated within two hours (11.1% vs 26.1%, aOR: 0.35, CI: 0.29-0.42). Antibiotic receipt within one hour was comparable (45.3% vs 48.1%, aOR: 0.89, CI: 0.79-1.01). However, differences emerged for antibiotics within three hours (66.2% vs 83.8%, aOR: 0.38, CI: 0.32-0.44) and persisted at six hours (77.0% vs 92.5%, aOR: 0.27, CI: 0.22-33). Excluding ICU-presenting sepsis from propensity matching exaggerated disparities in antibiotic receipt at one hour (43.4% vs 49.1%, aOR: 0.80, CI: 0.68-0.93), three hours (64.2% vs 86.1%, aOR: 0.29, CI: 0.24-0.35), and six hours (75.7% vs 93.0%, aOR: 0.23, CI: 0.18-0.30). HPS patients more frequently had repeat lactate obtained within 24 hours (62.4% vs 54.3%, aOR: 1.40, CI: 1.23-1.59).
Patient Outcomes
HPS patients had higher mortality (31.2% vs19.3%), mechanical ventilation (51.5% vs27.4%), and ICU admission (60.6% vs 46.5%) (Table 1 and Supplemental Table 6). Figure 2b shows propensity-matched and covariate-adjusted differences in patient outcomes before and after adjusting for initial resuscitation. aORs corresponded to approximate relative risk differences18 of 1.38 (CI: 1.28-1.48), 1.68 (CI: 1.57-1.79), and 1.72 (CI: 1.61-1.84) for mortality, mechanical ventilation, and ICU admission, respectively. HPS was associated with 83% longer mortality-censored ICU stays (five vs nine days, HR–1: 1.83, CI: 1.65-2.03), and 108% longer hospital stay (eight vs 17 days, HR–1: 2.08, CI: 1.93-2.24). After adjustment for resuscitation, all effect sizes decreased but persisted. The initial crystalloid volume was a significant negative effect modifier for mortality (Supplemental Table 7). That is, the magnitude of the association between HPS and greater mortality decreased by a factor of 0.89 per 10 mL/kg given (CI: 0.82-0.97). We did not observe significant interaction from other interventions, or overall bundle compliance, meaning these interventions’ association with mortality did not significantly differ between HPS versus EDPS.
The implied attributable risk difference from discrepancies in initial resuscitation was 23.3% for mortality, 35.2% for mechanical ventilation, and 7.6% for ICU admission (Figure 2B). Resuscitation explained 26.5% of longer ICU LOS and 16.7% of longer hospital LOS associated with HPS.
Figure 2C shows sensitivity analysis excluding ICU-presenting sepsis from propensity matching (ie, limiting HPS to hospital ward presentations). Again, HPS was associated with all adverse outcomes, though effect sizes were smaller than in the primary cohort for all outcomes except hospital LOS. In this cohort, resuscitation factors now explained 16.5% of HPS’ association with mortality, and 14.5% of the association with longer ICU LOS. However, they explained a greater proportion (13.0%) of ICU admissions. Attributable risk differences were comparable to the primary cohort for mechanical ventilation (37.6%) and hospital LOS (15.3%).
DISCUSSION
In this analysis of 11,182 sepsis and septic shock patients, HPS contributed 22% of prevalence but >35% of total sepsis mortalities, ICU utilization, and hospital days. HPS patients had higher comorbidity burdens and had clinical presentations less obviously attributable to infection with more severe organ dysfunction. EDPS received antibiotics within three hours about 1.62 times more often than do HPS patients. EDPS patients also receive fluids initiated within two hours about 1.82 times more often than HPS patients do. HPS had nearly 1.5-fold greater mortality and LOS, and nearly two-fold greater mechanical ventilation and ICU utilization. Resuscitation disparities could partially explain these associations. These patterns persisted when comparing only wards presenting HPS with EDPS.
Our analysis revealed several notable findings. First, these data confirm that HPS represents a potentially high-impact target population that contributes adverse outcomes disproportionately frequently with respect to case prevalence.
Our findings, unsurprisingly, revealed HPS and EDPS reflect dramatically different patient populations. We found that the two groups significantly differed by the majority of the baseline factors we compared. It may be worth asking if and how these substantial differences in illness etiology, chronic health, and acute physiology impact what we consider an optimal approach to management. Significant interaction effects of fluid volume on the association between HPS and mortality suggest differential treatment effects may exist between patients. Indeed, patients who newly arrive from the community and those who are several days into admission likely have different volume status. However, no interactions were noted with other bundle elements, such as timeliness of antibiotics or timeliness of initial fluids.
Another potentially concerning observation was that HPS patients were admitted much less frequently from skilled nursing facilities, as it could imply that this poorer-fairing population had a comparatively higher baseline functional status. The fact that 25% of EDPS cases were admitted from these facilities also underscores the need to engage skilled nursing facility providers in future sepsis initiatives.
We found marked disparities in resuscitation. Timely delivery of interventions, such as antibiotics and initial fluid resuscitation, occurred less than half as often for HPS, especially on hospital wards. While evidence supporting the efficacy of specific 3-hour bundle elements remains unsettled,19 a wealth of literature demonstrates a correlation between bundle uptake and decreased sepsis mortality, especially for early antibiotic administration.13,20-26 Some analysis suggests that differing initial resuscitation practices explain different mortality rates in the early goal-directed therapy trials.27 The comparatively poor performance for non-ICU HPS indicates further QI efforts are better focused on inpatient wards, rather than on EDs or ICUs where resuscitation is already delivered with substantially greater fidelity.
While resuscitation differences partially explained outcome discrepancies between groups, they did not account for as much variation as expected. Though resuscitation accounted for >35% of attributable mechanical ventilation risk, it explained only 16.5% of mortality differences for non-ICU HPS vs EDPS. We speculate that several factors may contribute.
First, HPS patients are already hospitalized for another acute insult and may be too physiologically brittle to derive equal benefit from initial resuscitation. Some literature suggests protocolized sepsis resuscitation may paradoxically be more effective in milder/earlier disease.28
Second, clinical information indicating septic organ dysfunction may become available too late in HPS—a possible data limitation where inpatient providers are counterintuitively more likely to miss early signs of patients’ deterioration and a subsequent therapeutic window. Several studies found that fluid resuscitation is associated with improved sepsis outcomes only when it is administered very early.11,29-31 In inpatient wards, decreased monitoring32 and human factors (eg, hospital workflow, provider-to-patient ratios, electronic documentation burdens)33,34 may hinder early diagnosis. In contrast, ED environments are explicitly designed to identify acutely ill patients and deliver intervention rapidly. If HPS patients were sicker when they were identified, this would also explain their more severe organ dysfunctions. Our data seems to support this possibility. HPS patients had tachypnea less frequently but more often had impaired gas exchange. This finding may suggest that early tachypnea was either less often detected or documented, or that it had progressed further by the time of detection.
Third, inpatients with sepsis may more often present with greater diagnostic complexity. We observed that HPS patients were more often euthermic and less often tachypneic. Beyond suggesting a greater diagnostic challenge, this also raises questions as to whether differences reflect patient physiology (response to infection) or iatrogenic factors (eg, prior antipyretics). Higher comorbidity and acute physiological burdens also limit the degree to which new organ dysfunction can be clearly attributed to infection. We note differences in the proportion of patients who received antibiotics increased over time, suggesting that HPS patients who received delayed antibiotics did so much later than their EDPS counterparts. This lag could also arise from diagnostic difficulty.
All three possibilities highlight a potential lead time effect, where the same measured three-hour period on the wards, between meeting sepsis criteria and starting treatment, actually reflects a longer period between (as yet unmeasurable) pathobiologic “time zero” and treatment versus the ED. The time of sepsis detection, as distinct from the time of sepsis onset, therefore proves difficult to evaluate and impossible to account for statistically.
Regardless, our findings suggest additional difficulty in both the recognition and resuscitation of inpatient sepsis. Inpatients, especially with infections, may need closer monitoring. How to cost effectively implement this monitoring is a challenge that deserves attention.
A more rational systems approach to HPS likely combines efforts to improve initial resuscitation with other initiatives aimed at both improving monitoring and preventing infection.
To be clear, we do not imply that timely initial resuscitation does not matter on the wards. Rather, resuscitation-focused QI alone does not appear to be sufficient to overcome differences in outcomes for HPS. The 23.3% attributable mortality risk we observed still implies that resuscitation differences could explain nearly one in four excess HPS mortalities. We previously showed that timely resuscitation is strongly associated with better outcomes.11,13,30 As discussed above, the unclear degree to which better resuscitation is a marker for more obvious presentations is a persistent limitation of prior investigations and the present study.
Taken together, the ultimate question that this study raises but cannot answer is whether the timely recognition of sepsis, rather than any specific treatment, is what truly improves outcomes.
In addition to those above, this study has several limitations. Our study did not differentiate HPS with respect to patients admitted for noninfectious reasons and who subsequently became septic versus nonseptic patients admitted for an infection who subsequently became septic from that infection. Nor could we discriminate between missed ED diagnoses and true delayed presentations. We note distinguishing these entities clinically can be equally challenging. Additionally, this was a propensity-matched retrospective analysis of an existing sepsis cohort, and the many limitations of both retrospective study and propensity matching apply.35,36 We note that randomizing patients to develop sepsis in the community versus hospital is not feasible and that two of our aims intended to describe overall patterns rather than causal effects. We could not ascertain robust measures of severity of illness (eg, SOFA) because a real world setting precludes required data points—eg, urine output is unreliably recorded. We also note incomplete overlap between inclusion criteria and either Sepsis-2 or -3 definitions,1,37 because we designed and populated our database prior to publication of Sepsis-3. Further, we could not account for surgical source control, the appropriateness of antimicrobial therapy, mechanical ventilation before sepsis onset, or most treatments given after initial resuscitation.
In conclusion, hospital-presenting sepsis accounted for adverse patient outcomes disproportionately to prevalence. HPS patients had more complex presentations, received timely antibiotics half as often ED-presenting sepsis, and had nearly twice the mortality odds. Resuscitation disparities explained roughly 25% of this difference.
Disclosures
The authors have no conflicts of interest to disclose.
Funding
This investigation was funded in part by a grant from the Center for Medicare and Medicaid Innovation to the High Value Healthcare Collaborative, of which the study sites’ umbrella health system was a part. This grant helped fund the underlying QI program and database in this study.
Sepsis is both the most expensive condition treated and the most common cause of death in hospitals in the United States.1-3 Most sepsis patients (as many as 80% to 90%) meet sepsis criteria on hospital arrival, but mortality and costs are higher when meeting criteria after admission.3-6 Mechanisms of this increased mortality for these distinct populations are not well explored. Patients who present septic in the emergency department (ED) and patients who present as inpatients likely present very different challenges for recognition, treatment, and monitoring.7 Yet, how these groups differ by demographic and clinical characteristics, the etiology and severity of infection, and patterns of resuscitation care are not well described. Literature on sepsis epidemiology on hospital wards is particularly limited.8
This knowledge gap is important. If hospital-presenting sepsis (HPS) contributes disproportionately to disease burdCHFens, it reflects a high-yield population deserving the focus of quality improvement (QI) initiatives. If specific causes of disparities were identified—eg, poor initial resuscitation— they could be specifically targeted for correction. Given that current treatment guidelines are uniform for the two populations,9,10 characterizing phenotypic differences could also have implications for both diagnostic and therapeutic recommendations, particularly if the groups display substantially differing clinical presentations. Our prior work has not probed these effects specifically, but suggested ED versus inpatient setting at the time of initial sepsis presentation might be an effect modifier for the association between several elements of fluid resuscitation and patient outcomes.11,12
We, therefore, conducted a retrospective analysis to ask four sequential questions: (1) Do patients with HPS, compared with EDPS, contribute adverse outcome out of proportion to case prevalence? (2) At the time of initial presentation, how do HPS patients differ from EDPS patients with respect to demographics, comorbidities, infectious etiologies, clinical presentations, and severity of illness (3) If holding observed baseline factors constant, does the physical location of sepsis presentation inherently increase the risk for treatment delays and mortality? (4) To what extent can differences in the likelihood for timely initial treatment between the ED and inpatient settings explain differences in mortality and patient outcomes?
We hypothesized a priori that HPS would reflect chronically sicker patients whom both received less timely resuscitation and who contributed disproportionately frequent bad outcomes. We expected disparities in timely resuscitation care would explain a large proportion of this difference.
METHODS
We performed a retrospective analysis of the Northwell Sepsis Database, a prospectively captured, multisite, real world, consecutive-sample cohort of all “severe sepsis” and septic shock patients treated at nine tertiary and community hospitals in New York from October 1, 2014, to March 31, 2016. We analyzed all patients from a previously published cohort.11
Database Design and Structure
The Northwell Sepsis Database has previously been described in detail.11,13,14 Briefly, all patients met clinical sepsis criteria: (1) infection AND (2) ≥2 (SIRS) criteria AND (3) ≥1 acute organ dysfunction criterion. Organ dysfunction criteria were hypotension, acute kidney injury (AKI), coagulopathy, altered gas exchange, elevated bilirubin (≥2.0 mg/dL), or altered mental status (AMS; clarified in Supplemental Table 1). All organ dysfunction was not otherwise explained by patients’ medical histories; eg, patients on warfarin anticoagulation were not documented to have coagulopathy based on international normalized ratio > 1.5. The time of the sepsis episode (and database inclusion) was the time of the first vital sign measurement or laboratory result where a patient simultaneously met all three inclusion criteria: infection, SIRS, and organ dysfunction. The database excludes patients who were <18 years, declined bundle interventions, had advance directives precluding interventions, or were admitted directly to palliative care or hospice. Abstractors assumed comorbidities were absent if not documented within the medical record and that physiologic abnormalities were absent if not measured by the treatment team. There were no missing data for the variables analyzed. We report analysis in adherence with the STROBE statement guidelines for observational research.
Exposure
The primary exposure was whether patients had EDPS versus HPS. We defined EDPS patients as meeting all objective clinical inclusion criteria while physically in the ED. We defined HPS as first meeting sepsis inclusion criteria outside the ED, regardless of the reason for admission, and regardless of whether patients were admitted through the ED or directly to the hospital. All ED patients were admitted to the hospital.
Outcomes
Process outcomes were full 3-hour bundle compliance, time to antibiotic administration, blood cultures before antibiotics, time to fluid initiation, the volume of administered fluid resuscitation, lactate result time, and whether repeat lactate was obtained (Supplemental Table 2). Treatment times were times of administration (rather than order time). The primary patient outcome was hospital mortality. Secondary patient outcomes were mechanical ventilation, ICU admission, ICU days, hospital length of stay (LOS). We discounted HPS patients’ LOS to include only days after meeting the inclusion criteria. Patients were excluded from the analysis of the ICU admission outcome if they were already in the ICU prior to meeting sepsis criteria.
Statistical Analysis
We report continuous variables as means (standard deviation) or medians (interquartile range), and categorical variables as frequencies (proportions), as appropriate. Summative statistics with 95% confidence intervals (CI) describe overall group contributions. We used generalized linear models to determine patient factors associated with EDPS versus HPS, entering random effects for individual study sites to control for intercenter variability.
Next, to generate a propensity-matched cohort, we computed propensity scores adjusted from a priori selected variables: age, sex, tertiary versus community hospital, congestive heart failure (CHF), renal failure, COPD, diabetes, liver failure, immunocompromise, primary source of infection, nosocomial source, temperature, initial lactate, presenting hypotension, altered gas exchange, AMS, AKI, and coagulopathy. We then matched subjects 1:1 without optimization or replacement, imposing a caliper width of 0.01; ie, we required matched pairs to have a <1.0% difference in propensity scores. The macro used to match subjects is publically available.15
We then compared resuscitation and patient outcomes in the matched cohort using generalized linear models, ie, doubly-robust estimation (DRE).16 When assessing patient outcomes corrected for resuscitation, we used mixed DRE/multivariable regression. We did this for two reasons: first, DRE has the advantage of only requiring only one approach (propensity vs covariate adjustments) to be correctly specified.16 Second, computing propensity scores adjusted for resuscitation would be inappropriate given that resuscitation occurs after the exposure allocation (HPS vs EDPS). However, these factors could still impact the outcome and in fact, we hypothesized they were potential mediators of the exposure effect. To interrogate this mediating relationship, we recapitulated the DRE modeling but added covariates for resuscitation factors. Resuscitation-adjusted models controlled for timeliness of antibiotics, fluids, and lactate results; blood cultures before antibiotics; repeat lactate obtained, and fluid volume in the first six hours. Since ICU days and LOS are subject to competing risks bias (LOS could be shorter if patients died earlier), we used proportional hazards models where “the event” was defined as a live discharge to censor for mortality and we report output as inverse hazard ratios. We also tested interaction coefficients for discrete bundle elements and HPS to determine if specific bundle elements were effect modifiers for the association between the presenting location and mortality risk. Finally, we estimated attributable risk differences by comparing adjusted odds ratios of adverse outcome with and without adjustment for resuscitation variables, as described by Sahai et al.17
As sensitivity analyses, we recomputed propensity scores and generated a new matched cohort that excluded HPS patients who met criteria for sepsis while already in the ICU for another reason (ie, excluding ICU-presenting sepsis). We then recapitulated all analyses as above for this cohort. We performed analyses using SAS version 9.4 (SAS Institute, Cary, North Carolina).
RESULTS
Prevalence and Outcome Contributions
Of the 11,182 sepsis patients in the database, we classified 2,509 (22%) as HPS (Figure 1). HPS contributed 785 (35%) of 2,241 sepsis-related mortalities, 1,241 (38%) mechanical ventilations, and 1,762 (34%) ICU admissions. Of 39,263 total ICU days and 127,178 hospital days, HPS contributed 18,104 (46.1%) and 44,412 (34.9%) days, respectively.
Patient Characteristics
Most HPS presented early in the hospital course, with 1,352 (53.9%) cases meeting study criteria within three days of admission. Median time from admission to meeting study criteria for HPS was two days (interquartile range: one to seven days). We report selected baseline patient characteristics in Table 1 and adjusted associations of baseline variables with HPS versus EDPS in Table 2. The full cohort characterization is available in Supplemental Table 3. Notably, HPS patients more often had CHF (aOR [adjusted odds ratio}: 1.31, CI: 1.18-1.47) or renal failure (aOR: 1.62, CI: 1.38-1.91), gastrointestinal source of infection (aOR: 1.84, CI: 1.48-2.29), hypothermia (aOR: 1.56, CI: 1.28-1.90) hypotension (aOR: 1.85, CI: 1.65-2.08), or altered gas exchange (aOR: 2.46, CI: 1.43-4.24). In contrast, HPS patients less frequently were admitted from skilled nursing facilities (aOR: 0.44, CI: 0.32-0.60), or had COPD (aOR: 0.53, CI: 0.36-0.76), fever (aOR: 0.70, CI: 0.52-0.91), tachypnea (aOR: 0.76, CI: 0.58-0.98), or AKI (aOR: 082, CI: 0.68-0.97). Other baseline variables were similar, including respiratory source, tachycardia, white cell abnormalities, AMS, and coagulopathies. These associations were preserved in the sensitivity analysis excluding ICU-presenting sepsis.
Propensity Matching
Propensity score matching yielded 1,942 matched pairs (n = 3,884, 77% of HPS patients, 22% of EDPS patients). Table 1 and Supplemental Table 3 show patient characteristics after propensity matching. Supplemental Table 4 shows the propensity model. The frequency densities are shown for the cohort as a function of propensity score in Supplemental Figure 1. After matching, frequencies between groups differed by <5% for all categorical variables assessed. In the sensitivity analysis, propensity matching (model in Supplemental Table 5) resulted in 1,233 matched pairs (n = 2,466, 49% of HPS patients, 14% of EDPS patients), with group differences comparable to the primary analysis.
Process Outcomes
We present propensity-matched differences in initial resuscitation in Figure 2A for all HPS patients, as well as non-ICU-presenting HPS, versus EDPS. HPS patients were roughly half as likely to receive fully 3-hour bundle compliant care (17.0% vs 30.3%, aOR: 0.47, CI: 0.40-0.57), to have blood cultures drawn within three hours prior to antibiotics (44.9% vs 67.2%, aOR: 0.40, CI: 0.35-0.46), or to receive fluid resuscitation initiated within two hours (11.1% vs 26.1%, aOR: 0.35, CI: 0.29-0.42). Antibiotic receipt within one hour was comparable (45.3% vs 48.1%, aOR: 0.89, CI: 0.79-1.01). However, differences emerged for antibiotics within three hours (66.2% vs 83.8%, aOR: 0.38, CI: 0.32-0.44) and persisted at six hours (77.0% vs 92.5%, aOR: 0.27, CI: 0.22-33). Excluding ICU-presenting sepsis from propensity matching exaggerated disparities in antibiotic receipt at one hour (43.4% vs 49.1%, aOR: 0.80, CI: 0.68-0.93), three hours (64.2% vs 86.1%, aOR: 0.29, CI: 0.24-0.35), and six hours (75.7% vs 93.0%, aOR: 0.23, CI: 0.18-0.30). HPS patients more frequently had repeat lactate obtained within 24 hours (62.4% vs 54.3%, aOR: 1.40, CI: 1.23-1.59).
Patient Outcomes
HPS patients had higher mortality (31.2% vs 19.3%), mechanical ventilation (51.5% vs 27.4%), and ICU admission (60.6% vs 46.5%) (Table 1 and Supplemental Table 6). Figure 2B shows propensity-matched and covariate-adjusted differences in patient outcomes before and after adjusting for initial resuscitation. aORs corresponded to approximate relative risk differences18 of 1.38 (CI: 1.28-1.48), 1.68 (CI: 1.57-1.79), and 1.72 (CI: 1.61-1.84) for mortality, mechanical ventilation, and ICU admission, respectively. HPS was associated with 83% longer mortality-censored ICU stays (five vs nine days, HR–1: 1.83, CI: 1.65-2.03) and 108% longer hospital stays (eight vs 17 days, HR–1: 2.08, CI: 1.93-2.24). After adjustment for resuscitation, all effect sizes decreased but persisted. Initial crystalloid volume was a significant negative effect modifier for mortality (Supplemental Table 7): the magnitude of the association between HPS and greater mortality decreased by a factor of 0.89 per 10 mL/kg given (CI: 0.82-0.97). We did not observe significant interaction from other interventions or from overall bundle compliance, meaning these interventions' association with mortality did not significantly differ between HPS and EDPS.
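As a hedged illustration of the square-root transformation cited above (reference 18), this minimal Python sketch converts between odds ratios and approximate relative risks for a common outcome. The aORs printed here are back-calculations implied by the transformation, not values reported in this study:

```python
import math

def approx_rr_from_or(odds_ratio: float) -> float:
    """Approximate the relative risk of a common outcome from an odds
    ratio via the square-root transformation (reference 18): RR ~ sqrt(OR)."""
    return math.sqrt(odds_ratio)

# The paper reports the transformed values (approximate RRs); the ORs
# printed here are merely what inverting the transformation would imply.
for outcome, rr in [("mortality", 1.38),
                    ("mechanical ventilation", 1.68),
                    ("ICU admission", 1.72)]:
    print(f"{outcome}: approximate RR {rr:.2f} <- implied aOR {rr ** 2:.2f}")
```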
The implied attributable risk difference from discrepancies in initial resuscitation was 23.3% for mortality, 35.2% for mechanical ventilation, and 7.6% for ICU admission (Figure 2B). Resuscitation explained 26.5% of longer ICU LOS and 16.7% of longer hospital LOS associated with HPS.
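The study's exact mediation calculation lives in its methods and is not reproduced here. As a rough sketch only, one common change-in-estimate approach to the proportion of excess risk explained by a set of mediators is shown below; the input values are purely hypothetical and chosen only to illustrate the arithmetic:

```python
def proportion_explained(effect_total: float, effect_adjusted: float) -> float:
    """Change-in-estimate share of an excess-risk association accounted
    for by mediators: (RR_total - RR_adjusted) / (RR_total - 1).
    Inputs are ratio effect measures greater than 1."""
    return (effect_total - effect_adjusted) / (effect_total - 1.0)

# Hypothetical values: a total RR of 1.38 falling to 1.29 after
# adjustment for resuscitation would imply roughly 24% explained.
print(f"{proportion_explained(1.38, 1.29):.1%}")  # -> 23.7%
```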
Figure 2C shows the sensitivity analysis excluding ICU-presenting sepsis from propensity matching (ie, limiting HPS to hospital ward presentations). Again, HPS was associated with all adverse outcomes, though effect sizes were smaller than in the primary cohort for all outcomes except hospital LOS. In this cohort, resuscitation factors explained 16.5% of HPS' association with mortality and 14.5% of the association with longer ICU LOS. However, they explained a greater proportion (13.0%) of ICU admissions than in the primary analysis (7.6%). Attributable risk differences were comparable to the primary cohort for mechanical ventilation (37.6%) and hospital LOS (15.3%).
DISCUSSION
In this analysis of 11,182 sepsis and septic shock patients, HPS contributed 22% of sepsis prevalence but >35% of total sepsis mortalities, ICU utilization, and hospital days. HPS patients had higher comorbidity burdens, clinical presentations less obviously attributable to infection, and more severe organ dysfunction. EDPS patients received antibiotics within three hours about 1.62 times more often than HPS patients and received fluids initiated within two hours about 1.82 times more often. HPS had nearly 1.5-fold greater mortality and LOS and nearly 2-fold greater mechanical ventilation and ICU utilization. Resuscitation disparities could partially explain these associations. These patterns persisted when comparing only ward-presenting HPS with EDPS.
Our analysis revealed several notable findings. First, these data confirm that HPS represents a potentially high-impact target population that contributes adverse outcomes out of proportion to its case prevalence.
Our findings, unsurprisingly, revealed that HPS and EDPS reflect dramatically different patient populations: the two groups differed significantly on the majority of the baseline factors we compared. It is worth asking whether, and how, these substantial differences in illness etiology, chronic health, and acute physiology affect what we consider an optimal approach to management. The significant interaction of fluid volume with the association between HPS and mortality suggests that differential treatment effects may exist between patients. Indeed, patients who newly arrive from the community and those who are several days into an admission likely have different volume status. However, no interactions were noted with other bundle elements, such as timeliness of antibiotics or timeliness of initial fluids.
Another potentially concerning observation was that HPS patients were admitted from skilled nursing facilities much less frequently, as this could imply that this poorer-faring population had a comparatively higher baseline functional status. The fact that 25% of EDPS cases were admitted from these facilities also underscores the need to engage skilled nursing facility providers in future sepsis initiatives.
We found marked disparities in resuscitation. Timely delivery of interventions, such as antibiotics and initial fluid resuscitation, occurred less than half as often for HPS, especially on hospital wards. While evidence supporting the efficacy of specific 3-hour bundle elements remains unsettled,19 a wealth of literature demonstrates a correlation between bundle uptake and decreased sepsis mortality, especially for early antibiotic administration.13,20-26 One analysis suggests that differing initial resuscitation practices explain the different mortality rates in the early goal-directed therapy trials.27 The comparatively poor performance for non-ICU HPS indicates that further QI efforts are better focused on inpatient wards rather than on EDs or ICUs, where resuscitation is already delivered with substantially greater fidelity.
While resuscitation differences partially explained outcome discrepancies between groups, they did not account for as much variation as expected. Though resuscitation accounted for >35% of attributable mechanical ventilation risk, it explained only 16.5% of mortality differences for non-ICU HPS vs EDPS. We speculate that several factors may contribute.
First, HPS patients are already hospitalized for another acute insult and may be too physiologically brittle to derive equal benefit from initial resuscitation. Some literature suggests protocolized sepsis resuscitation may paradoxically be more effective in milder/earlier disease.28
Second, clinical information indicating septic organ dysfunction may become available too late in HPS, a possible data limitation in which inpatient providers, counterintuitively, are more likely to miss early signs of patients' deterioration and the therapeutic window that follows. Several studies found that fluid resuscitation is associated with improved sepsis outcomes only when it is administered very early.11,29-31 On inpatient wards, decreased monitoring32 and human factors (eg, hospital workflow, provider-to-patient ratios, electronic documentation burdens)33,34 may hinder early diagnosis. In contrast, ED environments are explicitly designed to identify acutely ill patients and deliver interventions rapidly. If HPS patients were sicker when they were identified, this would also explain their more severe organ dysfunction. Our data seem to support this possibility: HPS patients had tachypnea less frequently but more often had impaired gas exchange. This finding may suggest that early tachypnea was either less often detected or documented, or that illness had progressed further by the time of detection.
Third, inpatients with sepsis may more often present with greater diagnostic complexity. We observed that HPS patients were more often euthermic and less often tachypneic. Beyond suggesting a greater diagnostic challenge, this observation raises the question of whether such differences reflect patient physiology (response to infection) or iatrogenic factors (eg, prior antipyretics). Higher comorbidity and acute physiologic burdens also limit the degree to which new organ dysfunction can be clearly attributed to infection. We note that the between-group difference in the proportion of patients who had received antibiotics increased over time, suggesting that HPS patients whose antibiotics were delayed received them much later than their EDPS counterparts. This lag could also arise from diagnostic difficulty.
All three possibilities highlight a potential lead-time effect: the same measured three-hour period on the wards, between meeting sepsis criteria and starting treatment, may actually reflect a longer interval between the (as yet unmeasurable) pathobiologic “time zero” and treatment than the same period does in the ED. The time of sepsis detection, as distinct from the time of sepsis onset, is therefore difficult to evaluate and impossible to account for statistically.
Regardless, our findings suggest additional difficulty in both the recognition and resuscitation of inpatient sepsis. Inpatients, especially those with infections, may need closer monitoring. How to implement this monitoring cost-effectively is a challenge that deserves attention.
A more rational systems approach to HPS likely combines efforts to improve initial resuscitation with other initiatives aimed at both improving monitoring and preventing infection.
To be clear, we do not imply that timely initial resuscitation does not matter on the wards. Rather, resuscitation-focused QI alone does not appear to be sufficient to overcome differences in outcomes for HPS. The 23.3% attributable mortality risk we observed still implies that resuscitation differences could explain nearly one in four excess HPS mortalities. We previously showed that timely resuscitation is strongly associated with better outcomes.11,13,30 As discussed above, the unclear degree to which better resuscitation is a marker for more obvious presentations is a persistent limitation of prior investigations and the present study.
Taken together, the ultimate question that this study raises but cannot answer is whether the timely recognition of sepsis, rather than any specific treatment, is what truly improves outcomes.
In addition to those above, this study has several limitations. Our study did not differentiate between HPS patients admitted for noninfectious reasons who subsequently became septic and nonseptic patients admitted for an infection who subsequently became septic from that infection. Nor could we discriminate between missed ED diagnoses and true delayed presentations; we note that distinguishing these entities clinically can be equally challenging. Additionally, this was a propensity-matched retrospective analysis of an existing sepsis cohort, and the many limitations of both retrospective studies and propensity matching apply.35,36 We note that randomizing patients to develop sepsis in the community versus the hospital is not feasible and that two of our aims intended to describe overall patterns rather than causal effects. We could not ascertain robust measures of severity of illness (eg, SOFA) because the real-world setting precludes capture of required data points; urine output, for example, is unreliably recorded. We also note incomplete overlap between our inclusion criteria and either the Sepsis-2 or Sepsis-3 definitions,1,37 because we designed and populated our database prior to the publication of Sepsis-3. Further, we could not account for surgical source control, the appropriateness of antimicrobial therapy, mechanical ventilation before sepsis onset, or most treatments given after initial resuscitation.
In conclusion, hospital-presenting sepsis accounted for a share of adverse patient outcomes disproportionate to its prevalence. HPS patients had more complex presentations, received timely antibiotics half as often as ED-presenting sepsis patients, and had nearly twice the mortality odds. Resuscitation disparities explained roughly 25% of this difference.
Disclosures
The authors have no conflicts of interest to disclose.
Funding
This investigation was funded in part by a grant from the Center for Medicare and Medicaid Innovation to the High Value Healthcare Collaborative, of which the study sites’ umbrella health system was a part. This grant helped fund the underlying QI program and database in this study.
1. Singer M, Deutschman CS, Seymour CW, et al. The third international consensus definitions for sepsis and septic shock (Sepsis-3). JAMA. 2016;315(8):801-810. doi: 10.1001/jama.2016.0287. PubMed
2. Torio CM, Andrews RM. National inpatient hospital costs: the most expensive conditions by payer, 2011. Statistical Brief No. 160. Rockville, MD: Agency for Healthcare Research and Quality; 2013. PubMed
3. Liu V, Escobar GJ, Greene JD, et al. Hospital deaths in patients with sepsis from 2 independent cohorts. JAMA. 2014;312(1):90-92. doi: 10.1001/jama.2014.5804. PubMed
4. Seymour CW, Liu VX, Iwashyna TJ, et al. Assessment of clinical criteria for sepsis: for the third international consensus definitions for sepsis and septic shock (Sepsis-3). JAMA. 2016;315(8):762-774. doi: 10.1001/jama.2016.0288. PubMed
5. Jones SL, Ashton CM, Kiehne LB, et al. Outcomes and resource use of sepsis-associated stays by presence on admission, severity, and hospital type. Med Care. 2016;54(3):303-310. doi: 10.1097/MLR.0000000000000481. PubMed
6. Page DB, Donnelly JP, Wang HE. Community-, healthcare-, and hospital-acquired severe sepsis hospitalizations in the university healthsystem consortium. Crit Care Med. 2015;43(9):1945-1951. doi: 10.1097/CCM.0000000000001164. PubMed
7. Rothman M, Levy M, Dellinger RP, et al. Sepsis as 2 problems: identifying sepsis at admission and predicting onset in the hospital using an electronic medical record-based acuity score. J Crit Care. 2016;38:237-244. doi: 10.1016/j.jcrc.2016.11.037. PubMed
8. Chan P, Peake S, Bellomo R, Jones D. Improving the recognition of, and response to in-hospital sepsis. Curr Infect Dis Rep. 2016;18(7):20. doi: 10.1007/s11908-016-0528-7. PubMed
9. Rhodes A, Evans LE, Alhazzani W, et al. Surviving sepsis campaign: international guidelines for management of sepsis and septic shock: 2016. Crit Care Med. 2017;45(3):486-552. doi: 10.1097/CCM.0000000000002255. PubMed
10. Levy MM, Evans LE, Rhodes A. The Surviving Sepsis Campaign Bundle: 2018 Update. Crit Care Med. 2018;46(6):997-1000. doi: 10.1097/CCM.0000000000003119. PubMed
11. Leisman DE, Goldman C, Doerfler ME, et al. Patterns and outcomes associated with timeliness of initial crystalloid resuscitation in a prospective sepsis and septic shock cohort. Crit Care Med. 2017;45(10):1596-1606. doi: 10.1097/CCM.0000000000002574. PubMed
12. Leisman DE, Doerfler ME, Schneider SM, Masick KD, D’Amore JA, D’Angelo JK. Predictors, prevalence, and outcomes of early crystalloid responsiveness among initially hypotensive patients with sepsis and septic shock. Crit Care Med. 2018;46(2):189-198. doi: 10.1097/CCM.0000000000002834. PubMed
13. Leisman DE, Doerfler ME, Ward MF, et al. Survival benefit and cost savings from compliance with a simplified 3-hour sepsis bundle in a series of prospective, multisite, observational cohorts. Crit Care Med. 2017;45(3):395-406. doi: 10.1097/CCM.0000000000002184. PubMed
14. Doerfler ME, D’Angelo J, Jacobsen D, et al. Methods for reducing sepsis mortality in emergency departments and inpatient units. Jt Comm J Qual Patient Saf. 2015;41(5):205-211. doi: 10.1016/S1553-7250(15)41027-X. PubMed
15. Murphy B, Fraeman KH. A general SAS® macro to implement optimal N:1 propensity score matching within a maximum radius. In: Paper 812-2017. Waltham, MA: Evidera; 2017. https://support.sas.com/resources/papers/proceedings17/0812-2017.pdf. Accessed February 20, 2019.
16. Funk MJ, Westreich D, Wiesen C, Stürmer T, Brookhart MA, Davidian M. Doubly robust estimation of causal effects. Am J Epidemiol. 2011;173(7):761-767. doi: 10.1093/aje/kwq439. PubMed
17. Sahai H, Khurshid A. Statistics in Epidemiology: Methods, Techniques, and Applications. Boca Raton, FL: CRC Press; 1995.
18. VanderWeele TJ. On a square-root transformation of the odds ratio for a common outcome. Epidemiology. 2017;28(6):e58-e60. doi: 10.1097/EDE.0000000000000733. PubMed
19. Pepper DJ, Natanson C, Eichacker PQ. Evidence underpinning the centers for medicare & medicaid services’ severe sepsis and septic shock management bundle (SEP-1). Ann Intern Med. 2018;168(8):610-612. doi: 10.7326/L18-0140. PubMed
20. Levy MM, Rhodes A, Phillips GS, et al. Surviving sepsis campaign: association between performance metrics and outcomes in a 7.5-year study. Crit Care Med. 2015;43(1):3-12. doi: 10.1097/CCM.0000000000000723. PubMed
21. Liu VX, Morehouse JW, Marelich GP, et al. Multicenter implementation of a treatment bundle for patients with sepsis and intermediate lactate values. Am J Respir Crit Care Med. 2016;193(11):1264-1270. doi: 10.1164/rccm.201507-1489OC. PubMed
22. Miller RR, Dong L, Nelson NC, et al. Multicenter implementation of a severe sepsis and septic shock treatment bundle. Am J Respir Crit Care Med. 2013;188(1):77-82. doi: 10.1164/rccm.201212-2199OC. PubMed
23. Seymour CW, Gesten F, Prescott HC, et al. Time to treatment and mortality during mandated emergency care for sepsis. N Engl J Med. 2017;376(23):2235-2244. doi: 10.1056/NEJMoa1703058. PubMed
24. Pruinelli L, Westra BL, Yadav P, et al. Delay within the 3-hour surviving sepsis campaign guideline on mortality for patients with severe sepsis and septic shock. Crit Care Med. 2018;46(4):500-505. doi: 10.1097/CCM.0000000000002949. PubMed
25. Kumar A, Roberts D, Wood KE, et al. Duration of hypotension before initiation of effective antimicrobial therapy is the critical determinant of survival in human septic shock. Crit Care Med. 2006;34(6):1589-1596. doi: 10.1097/01.CCM.0000217961.75225.E9. PubMed
26. Liu VX, Fielding-Singh V, Greene JD, et al. The timing of early antibiotics and hospital mortality in sepsis. Am J Respir Crit Care Med. 2017;196(7):856-863. doi: 10.1164/rccm.201609-1848OC. PubMed
27. Kalil AC, Johnson DW, Lisco SJ, Sun J. Early goal-directed therapy for sepsis: a novel solution for discordant survival outcomes in clinical trials. Crit Care Med. 2017;45(4):607-614. doi: 10.1097/CCM.0000000000002235. PubMed
28. Kellum JA, Pike F, Yealy DM, et al. Relationship between alternative resuscitation strategies, host response and injury biomarkers, and outcome in septic shock: analysis of the protocol-based care for early septic shock study. Crit Care Med. 2017;45(3):438-445. doi: 10.1097/CCM.0000000000002206. PubMed
29. Seymour CW, Cooke CR, Heckbert SR, et al. Prehospital intravenous access and fluid resuscitation in severe sepsis: an observational cohort study. Crit Care. 2014;18(5):533. doi: 10.1186/s13054-014-0533-x. PubMed
30. Leisman D, Wie B, Doerfler M, et al. Association of fluid resuscitation initiation within 30 minutes of severe sepsis and septic shock recognition with reduced mortality and length of stay. Ann Emerg Med. 2016;68(3):298-311. doi: 10.1016/j.annemergmed.2016.02.044. PubMed
31. Lee SJ, Ramar K, Park JG, Gajic O, Li G, Kashyap R. Increased fluid administration in the first three hours of sepsis resuscitation is associated with reduced mortality: a retrospective cohort study. Chest. 2014;146(4):908-915. doi: 10.1378/chest.13-2702. PubMed
32. Smyth MA, Daniels R, Perkins GD. Identification of sepsis among ward patients. Am J Respir Crit Care Med. 2015;192(8):910-911. doi: 10.1164/rccm.201507-1395ED. PubMed
33. Wenger N, Méan M, Castioni J, Marques-Vidal P, Waeber G, Garnier A. Allocation of internal medicine resident time in a Swiss hospital: a time and motion study of day and evening shifts. Ann Intern Med. 2017;166(8):579-586. doi: 10.7326/M16-2238. PubMed
34. Mamykina L, Vawdrey DK, Hripcsak G. How do residents spend their shift time? A time and motion study with a particular focus on the use of computers. Acad Med. 2016;91(6):827-832. doi: 10.1097/ACM.0000000000001148. PubMed
35. Kaji AH, Schriger D, Green S. Looking through the retrospectoscope: reducing bias in emergency medicine chart review studies. Ann Emerg Med. 2014;64(3):292-298. doi: 10.1016/j.annemergmed.2014.03.025. PubMed
36. Leisman DE. Ten pearls and pitfalls of propensity scores in critical care research: a guide for clinicians and researchers. Crit Care Med. 2019;47(2):176-185. doi: 10.1097/CCM.0000000000003567. PubMed
37. Levy MM, Fink MP, Marshall JC, et al. 2001 SCCM/ESICM/ACCP/ATS/SIS international sepsis definitions conference. Crit Care Med. 2003;31(4):1250-1256. doi: 10.1097/01.CCM.0000050454.01978.3B. PubMed
© 2019 Society of Hospital Medicine
Resuming Anticoagulation following Upper Gastrointestinal Bleeding among Patients with Nonvalvular Atrial Fibrillation—A Microsimulation Analysis
Anticoagulation is commonly used in the management of atrial fibrillation to reduce the risk of ischemic stroke. Warfarin and other anticoagulants increase the risk of hemorrhagic complications, including upper gastrointestinal bleeding (UGIB). Following UGIB, management of anticoagulation is highly variable. Many patients permanently discontinue anticoagulation, while others continue without interruption.1-4 Among patients who resume warfarin, different cohorts have measured median times to resumption ranging from four days to 50 days.1-3 Outcomes data are sparse, and clinical guidelines offer little direction.5
Following UGIB, the balance between the risks and benefits of anticoagulation changes over time. Rebleeding risk is highest immediately after the event and declines quickly; therefore, rapid resumption of anticoagulation causes patient harm.3 Meanwhile, the risk of stroke remains constant, and delay in resumption of anticoagulation is associated with increased risk of stroke and death.1 At some point in time following the initial UGIB, the expected harm from bleeding would equal the expected harm from stroke. This time point would represent the optimal time to restart anticoagulation.
Trial data are unlikely to identify the optimal time for restarting anticoagulation. A randomized trial comparing discrete reinitiation times (eg, two weeks vs six weeks) may easily miss the optimal timing. Moreover, because the daily probability of thromboembolic events is low, large numbers of patients would be required to power such a study. In addition, a number of oral anticoagulants are now approved for prevention of thromboembolic stroke in atrial fibrillation, and each drug may have different optimal timing.
In contrast to randomized trials that would be impracticable for addressing this clinical issue, microsimulation modeling can provide granular information regarding the optimal time to restart anticoagulation. Herein, we set out to estimate the expected benefit of reinitiation of warfarin, the most commonly used oral anticoagulant,6 or apixaban, the direct oral anticoagulant with the most favorable risk profile,7 as a function of days after UGIB.
METHODS
We previously described a microsimulation model of anticoagulation among patients with nonvalvular atrial fibrillation (NVAF; hereafter, we refer to this model as the Personalized Anticoagulation Decision-Making Assistance model, or PADMA).8,9 For this study, we extended this model to incorporate the probability of rebleeding following UGIB and include apixaban as an alternative to warfarin. This model begins with a synthetic population following UGIB, the members of which are at varying risk for thromboembolism, recurrent UGIB, and other hemorrhages. For each patient, the model simulates a number of possible events (eg, thromboembolic stroke, intracranial hemorrhage, rebleeding, and other major extracranial hemorrhages) on each day of an acute period of 90 days after hemostasis. Patients who survive until the end of the acute period enter a simulation with annual, rather than daily, cycles. Our model then estimates total quality-adjusted life-years (QALYs) for each patient, discounted to the present. We report the average discounted QALYs produced by the model for the same population if all individuals in our input population were to resume either warfarin or apixaban on a specific day. Input parameters and ranges are summarized in Table 1, a simplified schematic of our model is shown in the Supplemental Appendix, and additional details regarding model structure and assumptions can be found in earlier work.8,9 We simulated from a health system perspective over a lifelong time horizon. All analyses were performed in version 14 of Stata (StataCorp, LLC, College Station, Texas).
Synthetic Population
To generate a population reflective of the comorbidities and age distribution of the US population with NVAF, we merged relevant variables from the National Health and Nutrition Examination Survey (NHANES; 2011-2012), using multiple imputation to correct for missing variables.10 We then bootstrapped to national population estimates by age and sex to arrive at a hypothetical population of the United States.11 Because NHANES does not include atrial fibrillation, we applied sex- and age-specific prevalence rates from the AnTicoagulation and Risk Factors In Atrial Fibrillation study.12 We then calculated commonly used risk scores (CHA2DS2-Vasc and HAS-BLED) for each patient and limited the population to patients with a CHA2DS2-Vasc score of one or greater.13,14 The population resuming apixaban was further limited to patients whose creatinine clearance was 25 mL/min or greater in keeping with the entry criteria in the phase 3 clinical trial on which the medication’s approval was based.15
To estimate the patient-specific probability of rebleeding, we generated a Rockall score for each patient.16 Although the discrimination of the Rockall score for individual patients is limited, as it is for all tools used to predict rebleeding following UGIB, the score has demonstrated reasonable calibration across a threefold risk gradient.17-19 International consensus guidelines recommend the Rockall score as one of two risk prediction tools for clinical use in the management of patients with UGIB.20 In addition, because the Rockall score includes demographic components (five of a possible 11 points), our estimates of rebleeding risk are covariant with other patient-specific risks. We assumed that the endoscopic components of the Rockall score were present in our cohort at the same frequencies as in the original derivation and independent of known patient risk factors.16 For example, 441 of 4,025 patients in the original Rockall derivation cohort presented with a systolic blood pressure less than 100 mm Hg; we therefore assumed that an independent and random 10.96% of the cohort would present with shock, which confers two points in the Rockall score.
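As a hedged sketch of this sampling assumption (not the authors' code, which was written in Stata), the shock example above can be drawn independently at its derivation-cohort frequency:

```python
import random

# Frequency of shock (SBP < 100 mm Hg) in the original Rockall derivation
# cohort: 441 of 4,025 patients (~10.96%); shock contributes 2 points.
P_SHOCK = 441 / 4025

def sampled_shock_points(rng: random.Random) -> int:
    """Rockall points contributed by shock for one simulated patient,
    drawn independently of the patient's other risk factors."""
    return 2 if rng.random() < P_SHOCK else 0

rng = random.Random(2019)
share = sum(sampled_shock_points(rng) == 2 for _ in range(100_000)) / 100_000
print(f"{share:.4f}")  # ~0.1096, matching the assumed frequency
```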
The population was replicated 60 times, with identical copies of the population resuming anticoagulation on each of days 1-60 (where day zero represents hemostasis). Intermediate data regarding our simulated population can be found in the Supplemental Appendix and in prior work.
Event Type, Severity, and Mortality
Each patient in our simulation could sustain several discrete and independent events: ischemic stroke, intracranial hemorrhage, recurrent UGIB, or extracranial major hemorrhage other than recurrent UGIB. As in prior analyses using the PADMA model, we did not consider minor hemorrhagic events.8
The probability of each event was conditional on the corresponding risk scoring system. Patient-specific probability of ischemic stroke was conditional on the CHA2DS2-Vasc score.21,22 Patient-specific probability of intracranial hemorrhage was conditional on the HAS-BLED score, with the proportions of each considered intracranial hemorrhage subtype (intracerebral, subarachnoid, or subdural) bootstrapped from previously published data.21-24 Patient-specific probability of rebleeding was conditional on the Rockall score, using the combined Rockall and Vreeburg validation cohorts.17 Patient-specific probability of extracranial major hemorrhage was conditional on the HAS-BLED score.21 To avoid double-counting of UGIB, we subtracted the baseline risk of UGIB from the overall rate of extracranial major hemorrhage using previously published data on relative frequency and a bootstrapping approach.25
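Because the model runs daily cycles during the 90-day acute period, score-conditional event rates (typically reported annually) must be converted to daily probabilities. A minimal sketch, assuming a constant hazard within the year; the 5% figure is hypothetical:

```python
def daily_probability(annual_probability: float) -> float:
    """Convert an annual event probability to a constant daily probability
    under a constant-hazard assumption: p_day = 1 - (1 - p_year)**(1/365)."""
    return 1.0 - (1.0 - annual_probability) ** (1.0 / 365.0)

# Hypothetical example: an annual ischemic stroke probability of 5%
# corresponds to roughly 0.014% per day.
print(f"{daily_probability(0.05):.6f}")  # -> 0.000141
```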
Probability of Rebleeding Over Time
To estimate the decrease in rebleeding risk over time, we searched the Medline database for systematic reviews of recurrent bleeding following UGIB using the strategy detailed in the Supplemental Appendix. Using the interval rates of rebleeding we identified, we calculated implied daily rates of rebleeding at the midpoint of each interval. For example, 39.5% of rebleeding events occurred within three days of hemostasis, implying a daily rate of approximately 13.2% on day two (32 of 81 events over a three-day period). We repeated this process to estimate daily rates at the midpoint of each reported time interval and fitted an exponential decay function.26 The exponential fit these data points well, but we lacked sufficient data to test other survival functions (eg, Gompertz or lognormal). Our fitted exponential can be expressed as:
P(rebleeding) = b0 × exp(b1 × day)
where b0 = 0.1843 (SE: 0.0136) and b1 = –0.1563 (SE: 0.0188). For example, a mean of 3.9% of rebleeding episodes will occur on day 10 (0.1843 × exp(–0.1563 × 10)).
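Using only the coefficients reported above, the fitted curve can be evaluated directly. A minimal sketch that reproduces the day-10 example from the text:

```python
import math

B0, B1 = 0.1843, -0.1563  # fitted coefficients from the text (SEs 0.0136 and 0.0188)

def p_rebleed(day: float) -> float:
    """Fitted exponential decay: expected share of rebleeding episodes
    occurring on a given day after hemostasis (day 0)."""
    return B0 * math.exp(B1 * day)

print(f"day 2:  {p_rebleed(2):.3f}")   # ~0.135, near the implied 13.2% daily rate
print(f"day 10: {p_rebleed(10):.3f}")  # ~0.039, the 3.9% quoted in the text
```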
Relative Risks of Events with Anticoagulation
For patients resuming warfarin, the probabilities of each event were adjusted based on patient-specific daily INR. All INRs were assumed to be 1.0 until the day of warfarin reinitiation, after which interpolated trajectories of postinitiation INR measurements were sampled for each patient from an earlier study of clinical warfarin initiation.27 Relative risks of ischemic stroke and hemorrhagic events were calculated based on each day’s INR.
For patients taking apixaban, we assumed that the medication would reach full therapeutic effect one day after reinitiation. Based on available evidence, we applied the relative risks of each event with apixaban compared with warfarin.25
Future Disability and Mortality
Each event in our simulation resulted in hospitalization. Length of stay was sampled for each diagnosis.28 The disutility of hospitalization was estimated based on length of stay.8 Inpatient mortality and future disability were predicted for each event as previously described.8 We assumed that recurrent episodes of UGIB conferred morbidity and mortality identical to extracranial major hemorrhages more broadly.29,30
Disutilities
We used a multiplicative model for disutility with baseline utilities conditional on age and sex.31 Each day after resumption of anticoagulation carried a disutility of 0.012 for warfarin or 0.002 for apixaban, which we assumed to be equivalent to aspirin in disutility.32 Long-term disutility and life expectancy were conditional on modified Rankin Score (mRS).33,34 We discounted all QALYs to day zero using standard exponential discounting and a discount rate centered at 3%. We then computed the average discounted QALYs among the cohort of patients that resumed anticoagulation on each day following the index UGIB.
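A minimal sketch of the discounting step, assuming continuous-time exponential discounting at the 3% base-case rate (a discrete 1/(1.03)^t formulation would give nearly identical values); the utility value and day are hypothetical:

```python
import math

DISCOUNT_RATE = 0.03  # base-case annual rate from the text

def discounted(value: float, day: int) -> float:
    """Discount a utility amount accrued on a given day back to day zero
    using exponential discounting at the annual rate above."""
    return value * math.exp(-DISCOUNT_RATE * day / 365.0)

# Hypothetical example: one day lived at utility 0.85 on day 45 after
# hemostasis contributes slightly less than 0.85/365 discounted QALYs.
print(f"{discounted(0.85 / 365.0, 45):.6f}")
```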
Sensitivity Analyses and Metamodel
To assess sensitivity to continuously varying input parameters, such as discount rate, the proportion of extracranial major hemorrhages that are upper GI bleeds, and inpatient mortality from extracranial major hemorrhage, we constructed a metamodel (a regression model of our microsimulation results).35 We tested for interactions among input parameters and dropped parameters that were not statistically significant predictors of discounted QALYs from our metamodel. We then tested for interactions between each parameter and day resuming anticoagulation to determine which factors may impact the optimal day of reinitiation. Finally, we used predicted marginal effects from our metamodel to assess the change in optimal day across the ranges of each input parameter when other parameters were held at their medians.
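A hedged sketch of such a metamodel follows; the file and column names are hypothetical stand-ins for a table holding one row per simulated scenario with its mean discounted QALYs and input-parameter draws:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input: one row per simulated scenario.
results = pd.read_csv("simulation_results.csv")

# Regress QALYs on the inputs, interacting each with the day of
# reinitiation; significant interaction terms flag parameters that can
# shift the optimal day, not merely the expected QALYs.
metamodel = smf.ols(
    "qalys ~ resume_day * (chads_vasc + rockall + has_bled + discount_rate)",
    data=results,
).fit()
print(metamodel.summary())
```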
RESULTS
Resuming warfarin on day zero produced the fewest QALYs. With delay in reinitiation of anticoagulation, expected QALYs increased, peaked, and then declined for all scenarios. In our base-case simulation of warfarin, peak utility was achieved by resumption 41 days after the index UGIB. Resumption between days 32 and 51 produced greater than 99.9% of peak utility. In our base-case simulation of apixaban, peak utility was achieved by resumption 32 days after the index UGIB. Resumption between days 21 and 47 produced greater than 99.9% of peak utility. Results for warfarin and apixaban are shown in Figures 1 and 2, respectively.
The optimal day of warfarin reinitiation was most sensitive to the CHA2DS2-Vasc score, varying by around 11 days between a CHA2DS2-Vasc score of one and a score of six (the 5th and 95th percentiles, respectively) when all other parameters were held at their medians. Results were comparatively insensitive to rebleeding risk: varying the Rockall score from two to seven (the 5th and 95th percentiles, respectively) added three days to the optimal warfarin resumption. Varying other parameters from the 5th to the 95th percentile (including HAS-BLED score, sex, age, and discount rate) changed expected QALYs but did not change the optimal day of warfarin reinitiation. The optimal day of warfarin reinitiation stratified by CHA2DS2-Vasc score is shown in Table 2.
Sensitivity analyses for apixaban produced broadly similar results, but with greater sensitivity to rebleeding risk. Optimal day of reinitiation varied by 15 days over the examined range of CHA2DS2-Vasc scores (Table 2) and by six days over the range of Rockall scores (Supplemental Appendix). Other input parameters, including HAS-BLED score, age, sex, and discount rate, changed expected QALYs and were significant in our metamodel but did not affect the optimal day of reinitiation. Metamodel results for both warfarin and apixaban are included in the Supplemental Appendix.
DISCUSSION
Anticoagulation is frequently prescribed for patients with NVAF, and hemorrhagic complications are common. Although anticoagulants are withheld following hemorrhages, scant evidence to inform the optimal timing of reinitiation is available. In this microsimulation analysis, we found that the optimal time to reinitiate anticoagulation following UGIB is around 41 days for warfarin and around 32 days for apixaban. We have further demonstrated that the optimal timing of reinitiation can vary by nearly two weeks, depending on a patient’s underlying risk of stroke, and that early reinitiation is more sensitive to rebleeding risk than late reinitiation.
Prior work has shown that early reinitiation of anticoagulation leads to higher rates of recurrent hemorrhage while failure to reinitiate anticoagulation is associated with higher rates of stroke and mortality.1-4,36 Our results add to the literature in several important ways. First, our model not only confirms that anticoagulation should be restarted but also suggests when this action should be taken. The competing risks of bleeding and stroke have left clinicians with little guidance; we have quantified the clinical reasoning required for the decision to resume anticoagulation. Second, by including the disutility of hospitalization and long-term disability, our model more accurately represents the complex tradeoffs between recurrent hemorrhage and (potentially disabling) stroke than would a comparison of event rates. Third, our model is conditional upon patient risk factors, allowing clinicians to personalize the timing of anticoagulation resumption. Theory would suggest that patients at higher risk of ischemic stroke benefit from earlier resumption of anticoagulation, while patients at higher risk of hemorrhage benefit from delayed reinitiation. We have quantified the extent to which patient-specific risks should change timing. Fourth, we offer a means of improving expected health outcomes that requires little more than appropriate scheduling. Current practice regarding resuming anticoagulation is widely variable: many patients never resume warfarin, and those who do resume do so after highly varied periods of time.1-5,36 We offer a means of standardizing clinical practice and improving expected patient outcomes.
Interestingly, patient-specific risk of rebleeding had little effect on our primary outcome for warfarin, and a greater effect in our simulation of apixaban. It would seem that rebleeding risk, which decreases roughly exponentially, is sufficiently low by the time period at which warfarin should be resumed that patient-specific hemorrhage risk factors have little impact. Meanwhile, at the shorter post-event intervals at which apixaban can be resumed, both stroke risk and patient-specific bleeding risk are worthy considerations.
Our model is subject to several important limitations. First, our predictions of the optimal day as a function of risk scores can only be as well-calibrated as the input scoring systems. It is intuitive that patients with higher risk of rebleeding benefit from delayed reinitiation, while patients with higher risk of thromboembolic stroke benefit from earlier reinitiation. Still, clinicians seeking to operationalize competing risks through these two scores—or, indeed, any score—should be mindful of their limited calibration and shared variance. In other words, while the optimal day of reinitiation is likely in the range we have predicted and varies to the degree demonstrated here, the optimal day we have predicted for each score is likely overly precise. However, while better-calibrated prediction models would improve the accuracy of our model, we believe ours to be the best estimate of timing given available data and this approach to be the most appropriate way to personalize anticoagulation resumption.
Our simulation of apixaban carries an additional source of potential miscalibration. In the clinical trials that led to their approval, apixaban and other direct oral anticoagulants (DOACs) were compared with warfarin over longer periods of time than the acute period simulated in this work. Over a short period of time, patients treated with more rapidly therapeutic medications (in this case, apixaban) would receive more days of effective therapy compared with a slower-onset medication, such as warfarin. Therefore, the relative risks experienced by patients are likely different over the time period we have simulated compared with those measured over longer periods of time (as in phase 3 clinical trials). Our results for apixaban should be viewed as more limited than our estimates for warfarin. More broadly, simulation analyses are intended to predict overall outcomes that are difficult to measure. While other frameworks to assess model credibility exist, the fact remains that no extant datasets can directly validate our predictions.37
Our findings are limited to patients with NVAF. Anticoagulants are prescribed for a variety of indications with widely varied underlying risks and benefits. Models constructed for these conditions would likely produce different timing for resumption of anticoagulation. Unfortunately, large scale cohort studies to inform such models are lacking. Similarly, we simulated UGIB, and our results should not be generalized to populations with other types of bleeding (eg, intracranial hemorrhage). Again, cohort studies of other types of bleeding would be necessary to understand the risks of anticoagulation over time in such populations.
Higher-quality data regarding risk of rebleeding over time would improve our estimates. Our literature search identified only one systematic review that could be used to estimate the risk of recurrent UGIB over time. These data are not adequate to interrogate other forms this survival curve could take, such as Gompertz or Weibull distributions. Recurrence risk almost certainly declines over time, but how quickly it declines carries additional uncertainty.
Despite these limitations, we believe our results to be the best estimates to date of the optimal time of anticoagulation reinitiation following UGIB. Our findings could help inform clinical practice guidelines and reduce variation in care where current practice guidelines are largely silent. Given the potential ease of implementing scheduling changes, our results represent an opportunity to improve patient outcomes with little resource investment.
In conclusion, after UGIB associated with anticoagulation, our model suggests that warfarin is optimally restarted approximately six weeks following hemostasis and that apixaban is optimally restarted approximately one month following hemostasis. Modest changes to this timing based on probability of thromboembolic stroke are reasonable.
Disclosures
The authors have nothing to disclose.
Funding
The authors received no specific funding for this work.
1. Witt DM, Delate T, Garcia DA, et al. Risk of thromboembolism, recurrent hemorrhage, and death after warfarin therapy interruption for gastrointestinal tract bleeding. Arch Intern Med. 2012;172(19):1484-1491. doi: 10.1001/archinternmed.2012.4261. PubMed
2. Sengupta N, Feuerstein JD, Patwardhan VR, et al. The risks of thromboembolism vs recurrent gastrointestinal bleeding after interruption of systemic anticoagulation in hospitalized inpatients with gastrointestinal bleeding: a prospective study. Am J Gastroenterol. 2015;110(2):328-335. doi: 10.1038/ajg.2014.398. PubMed
3. Qureshi W, Mittal C, Patsias I, et al. Restarting anticoagulation and outcomes after major gastrointestinal bleeding in atrial fibrillation. Am J Cardiol. 2014;113(4):662-668. doi: 10.1016/j.amjcard.2013.10.044. PubMed
4. Milling TJ, Spyropoulos AC. Re-initiation of dabigatran and direct factor Xa antagonists after a major bleed. Am J Emerg Med. 2016;34(11):19-25. doi: 10.1016/j.ajem.2016.09.049. PubMed
5. Brotman DJ, Jaffer AK. Resuming anticoagulation in the first week following gastrointestinal tract hemorrhage. Arch Intern Med. 2012;172(19):1492-1493. doi: 10.1001/archinternmed.2012.4309. PubMed
6. Barnes GD, Lucas E, Alexander GC, Goldberger ZD. National trends in ambulatory oral anticoagulant use. Am J Med. 2015;128(12):1300-5. doi: 10.1016/j.amjmed.2015.05.044. PubMed
7. Noseworthy PA, Yao X, Abraham NS, Sangaralingham LR, McBane RD, Shah ND. Direct comparison of dabigatran, rivaroxaban, and apixaban for effectiveness and safety in nonvalvular atrial fibrillation. Chest. 2016;150(6):1302-1312. doi: 10.1016/j.chest.2016.07.013. PubMed
8. Pappas MA, Barnes GD, Vijan S. Personalizing bridging anticoagulation in patients with nonvalvular atrial fibrillation—a microsimulation analysis. J Gen Intern Med. 2017;32(4):464-470. doi: 10.1007/s11606-016-3932-7. PubMed
9. Pappas MA, Vijan S, Rothberg MB, Singer DE. Reducing age bias in decision analyses of anticoagulation for patients with nonvalvular atrial fibrillation – a microsimulation study. PloS One. 2018;13(7):e0199593. doi: 10.1371/journal.pone.0199593. PubMed
10. National Center for Health Statistics. National Health and Nutrition Examination Survey. https://www.cdc.gov/nchs/nhanes/about_nhanes.htm. Accessed August 30, 2018.
11. United States Census Bureau. Age and sex composition in the United States: 2014. https://www.census.gov/data/tables/2014/demo/age-and-sex/2014-age-sex-composition.html. Accessed August 30, 2018.
12. Go AS, Hylek EM, Phillips KA, et al. Prevalence of diagnosed atrial fibrillation in adults: national implications for rhythm management and stroke prevention: the AnTicoagulation and Risk Factors in Atrial Fibrillation (ATRIA) study. JAMA. 2001;285(18):2370-2375. doi: 10.1001/jama.285.18.2370. PubMed
13. Lip GYH, Nieuwlaat R, Pisters R, Lane DA, Crijns HJGM. Refining clinical risk stratification for predicting stroke and thromboembolism in atrial fibrillation using a novel risk factor-based approach: the euro heart survey on atrial fibrillation. Chest. 2010;137(2):263-272. doi: 10.1378/chest.09-1584. PubMed
14. Pisters R, Lane DA, Nieuwlaat R, de Vos CB, Crijns HJGM, Lip GYH. A novel user-friendly score (HAS-BLED) to assess 1-year risk of major bleeding in patients with atrial fibrillation. Chest. 2010;138(5):1093-1100. doi: 10.1378/chest.10-0134. PubMed
15. Granger CB, Alexander JH, McMurray JJV, et al. Apixaban versus warfarin in patients with atrial fibrillation. N Engl J Med. 2011;365(11):981-992. doi: 10.1056/NEJMoa1107039.
16. Rockall TA, Logan RF, Devlin HB, Northfield TC. Risk assessment after acute upper gastrointestinal haemorrhage. Gut. 1996;38(3):316-321. doi: 10.1136/gut.38.3.316. PubMed
17. Vreeburg EM, Terwee CB, Snel P, et al. Validation of the Rockall risk scoring system in upper gastrointestinal bleeding. Gut. 1999;44(3):331-335. doi: 10.1136/gut.44.3.331. PubMed
18. Enns RA, Gagnon YM, Barkun AN, et al. Validation of the Rockall scoring system for outcomes from non-variceal upper gastrointestinal bleeding in a Canadian setting. World J Gastroenterol. 2006;12(48):7779-7785. doi: 10.3748/wjg.v12.i48.7779. PubMed
19. Stanley AJ, Laine L, Dalton HR, et al. Comparison of risk scoring systems for patients presenting with upper gastrointestinal bleeding: international multicentre prospective study. BMJ. 2017;356:i6432. doi: 10.1136/bmj.i6432. PubMed
20. Barkun AN, Bardou M, Kuipers EJ, et al. International consensus recommendations on the management of patients with nonvariceal upper gastrointestinal bleeding. Ann Intern Med. 2010;152(2):101-113. doi: 10.7326/0003-4819-152-2-201001190-00009. PubMed
21. Friberg L, Rosenqvist M, Lip GYH. Evaluation of risk stratification schemes for ischaemic stroke and bleeding in 182 678 patients with atrial fibrillation: the Swedish atrial fibrillation cohort study. Eur Heart J. 2012;33(12):1500-1510. doi: 10.1093/eurheartj/ehr488. PubMed
22. Friberg L, Rosenqvist M, Lip GYH. Net clinical benefit of warfarin in patients with atrial fibrillation: a report from the Swedish atrial fibrillation cohort study. Circulation. 2012;125(19):2298-2307. doi: 10.1161/CIRCULATIONAHA.111.055079. PubMed
23. Hart RG, Diener HC, Yang S, et al. Intracranial hemorrhage in atrial fibrillation patients during anticoagulation with warfarin or dabigatran: the RE-LY trial. Stroke. 2012;43(6):1511-1517. doi: 10.1161/STROKEAHA.112.650614. PubMed
24. Hankey GJ, Stevens SR, Piccini JP, et al. Intracranial hemorrhage among patients with atrial fibrillation anticoagulated with warfarin or rivaroxaban: the rivaroxaban once daily, oral, direct factor Xa inhibition compared with vitamin K antagonism for prevention of stroke and embolism trial in atrial fibrillation. Stroke. 2014;45(5):1304-1312. doi: 10.1161/STROKEAHA.113.004506. PubMed
25. Eikelboom JW, Wallentin L, Connolly SJ, et al. Risk of bleeding with 2 doses of dabigatran compared with warfarin in older and younger patients with atrial fibrillation : an analysis of the randomized evaluation of long-term anticoagulant therapy (RE-LY trial). Circulation. 2011;123(21):2363-2372. doi: 10.1161/CIRCULATIONAHA.110.004747. PubMed
26. El Ouali S, Barkun A, Martel M, Maggio D. Timing of rebleeding in high-risk peptic ulcer bleeding after successful hemostasis: a systematic review. Can J Gastroenterol Hepatol. 2014;28(10):543-548. doi: 10.1016/S0016-5085(14)60738-1. PubMed
27. Kimmel SE, French B, Kasner SE, et al. A pharmacogenetic versus a clinical algorithm for warfarin dosing. N Engl J Med. 2013;369(24):2283-2293. doi: 10.1056/NEJMoa1310669. PubMed
28. Healthcare Cost and Utilization Project (HCUP), Agency for Healthcare Research and Quality. HCUP Databases. https://www.hcup-us.ahrq.gov/nisoverview.jsp. Accessed August 31, 2018.
29. Guerrouij M, Uppal CS, Alklabi A, Douketis JD. The clinical impact of bleeding during oral anticoagulant therapy: assessment of morbidity, mortality and post-bleed anticoagulant management. J Thromb Thrombolysis. 2011;31(4):419-423. doi: 10.1007/s11239-010-0536-7. PubMed
30. Fang MC, Go AS, Chang Y, et al. Death and disability from warfarin-associated intracranial and extracranial hemorrhages. Am J Med. 2007;120(8):700-705. doi: 10.1016/j.amjmed.2006.07.034. PubMed
31. Guertin JR, Feeny D, Tarride JE. Age- and sex-specific Canadian utility norms, based on the 2013-2014 Canadian Community Health Survey. CMAJ. 2018;190(6):E155-E161. doi: 10.1503/cmaj.170317. PubMed
32. Gage BF, Cardinalli AB, Albers GW, Owens DK. Cost-effectiveness of warfarin and aspirin for prophylaxis of stroke in patients with nonvalvular atrial fibrillation. JAMA. 1995;274(23):1839-1845. doi: 10.1001/jama.1995.03530230025025. PubMed
33. Fang MC, Go AS, Chang Y, et al. Long-term survival after ischemic stroke in patients with atrial fibrillation. Neurology. 2014;82(12):1033-1037. doi: 10.1212/WNL.0000000000000248. PubMed
34. Hong KS, Saver JL. Quantifying the value of stroke disability outcomes: WHO global burden of disease project disability weights for each level of the modified Rankin scale. Stroke. 2009;40(12):3828-3833. doi: 10.1161/STROKEAHA.109.561365. PubMed
35. Jalal H, Dowd B, Sainfort F, Kuntz KM. Linear regression metamodeling as a tool to summarize and present simulation model results. Med Decis Mak. 2013;33(7):880-890. doi: 10.1177/0272989X13492014. PubMed
36. Staerk L, Lip GYH, Olesen JB, et al. Stroke and recurrent haemorrhage associated with antithrombotic treatment after gastrointestinal bleeding in patients with atrial fibrillation: nationwide cohort study. BMJ. 2015;351:h5876. doi: 10.1136/bmj.h5876. PubMed
37. Kopec JA, Finès P, Manuel DG, et al. Validation of population-based disease simulation models: a review of concepts and methods. BMC Public Health. 2010;10(1):710. doi: 10.1186/1471-2458-10-710. PubMed
38. Smith EE, Shobha N, Dai D, et al. Risk score for in-hospital ischemic stroke mortality derived and validated within the Get With The Guidelines-Stroke Program. Circulation. 2010;122(15):1496-1504. doi: 10.1161/CIRCULATIONAHA.109.932822. PubMed
39. Smith EE, Shobha N, Dai D, et al. A risk score for in-hospital death in patients admitted with ischemic or hemorrhagic stroke. J Am Heart Assoc. 2013;2(1):e005207. doi: 10.1161/JAHA.112.005207. PubMed
40. Busl KM, Prabhakaran S. Predictors of mortality in nontraumatic subdural hematoma. J Neurosurg. 2013;119(5):1296-1301. doi: 10.3171/2013.4.JNS122236. PubMed
41. Murphy SL, Kochanek KD, Xu J, Heron M. Deaths: final data for 2012. Natl Vital Stat Rep. 2015;63(9):1-117. http://www.ncbi.nlm.nih.gov/pubmed/26759855. Accessed August 31, 2018.
42. Dachs RJ, Burton JH, Joslin J. A user’s guide to the NINDS rt-PA stroke trial database. PLOS Med. 2008;5(5):e113. doi: 10.1371/journal.pmed.0050113. PubMed
43. Ashburner JM, Go AS, Reynolds K, et al. Comparison of frequency and outcome of major gastrointestinal hemorrhage in patients with atrial fibrillation on versus not receiving warfarin therapy (from the ATRIA and ATRIA-CVRN cohorts). Am J Cardiol. 2015;115(1):40-46. doi: 10.1016/j.amjcard.2014.10.006. PubMed
44. Weinstein MC, Siegel JE, Gold MR, Kamlet MS, Russell LB. Recommendations of the panel on cost-effectiveness in health and medicine. JAMA. 1996;276(15):1253-1258. doi: 10.1001/jama.1996.03540150055031. PubMed
Anticoagulation is commonly used in the management of atrial fibrillation to reduce the risk of ischemic stroke. Warfarin and other anticoagulants increase the risk of hemorrhagic complications, including upper gastrointestinal bleeding (UGIB). Following UGIB, management of anticoagulation is highly variable. Many patients permanently discontinue anticoagulation, while others continue without interruption.1-4 Among patients who resume warfarin, different cohorts have measured median times to resumption ranging from four days to 50 days.1-3 Outcomes data are sparse, and clinical guidelines offer little direction.5
Following UGIB, the balance between the risks and benefits of anticoagulation changes over time. Rebleeding risk is highest immediately after the event and declines quickly; therefore, rapid resumption of anticoagulation causes patient harm.3 Meanwhile, the risk of stroke remains constant, and delay in resumption of anticoagulation is associated with increased risk of stroke and death.1 At some point after the initial UGIB, the expected harm from bleeding would equal the expected harm from stroke. This time point would represent the optimal time to restart anticoagulation.
Trial data are unlikely to identify the optimal time for restarting anticoagulation. A randomized trial comparing discrete reinitiation times (eg, two weeks vs six weeks) may easily miss the optimal timing. Moreover, because the daily probability of thromboembolic events is low, large numbers of patients would be required to power such a study. In addition, a number of oral anticoagulants are now approved for prevention of thromboembolic stroke in atrial fibrillation, and each drug may have different optimal timing.
In contrast to randomized trials that would be impracticable for addressing this clinical issue, microsimulation modeling can provide granular information regarding the optimal time to restart anticoagulation. Herein, we set out to estimate the expected benefit of reinitiation of warfarin, the most commonly used oral anticoagulant,6 or apixaban, the direct oral anticoagulant with the most favorable risk profile,7 as a function of days after UGIB.
METHODS
We previously described a microsimulation model of anticoagulation among patients with nonvalvular atrial fibrillation (NVAF; hereafter, we refer to this model as the Personalized Anticoagulation Decision-Making Assistance model, or PADMA).8,9 For this study, we extended this model to incorporate the probability of rebleeding following UGIB and include apixaban as an alternative to warfarin. This model begins with a synthetic population following UGIB, the members of which are at varying risk for thromboembolism, recurrent UGIB, and other hemorrhages. For each patient, the model simulates a number of possible events (eg, thromboembolic stroke, intracranial hemorrhage, rebleeding, and other major extracranial hemorrhages) on each day of an acute period of 90 days after hemostasis. Patients who survive until the end of the acute period enter a simulation with annual, rather than daily, cycles. Our model then estimates total quality-adjusted life-years (QALYs) for each patient, discounted to the present. We report the average discounted QALYs produced by the model for the same population if all individuals in our input population were to resume either warfarin or apixaban on a specific day. Input parameters and ranges are summarized in Table 1, a simplified schematic of our model is shown in the Supplemental Appendix, and additional details regarding model structure and assumptions can be found in earlier work.8,9 We simulated from a health system perspective over a lifelong time horizon. All analyses were performed in version 14 of Stata (StataCorp, LLC, College Station, Texas).
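The model's two-phase structure can be illustrated with a short sketch. The Python fragment below is illustrative only (the analyses themselves were run in Stata); the event probabilities, relative risks, utility values, and the assumption that any event ends the run are placeholders rather than the published inputs.

    import math
    import random

    DISCOUNT_RATE = 0.03  # annual discount rate (base case: 3%)

    def simulate_patient(resume_day, p_stroke_daily, p_rebleed_day0, decay,
                         base_utility=0.80, horizon_years=30, seed=0):
        # Toy two-phase run: daily cycles over the 90-day acute period,
        # then annual cycles; returns QALYs discounted to day zero.
        rng = random.Random(seed)
        qalys = 0.0
        for day in range(90):  # acute period: daily cycles
            on_drug = day >= resume_day
            p_stroke = p_stroke_daily * (0.35 if on_drug else 1.0)  # placeholder RR
            p_rebleed = (p_rebleed_day0 * math.exp(decay * day)
                         * (2.0 if on_drug else 1.0))               # placeholder RR
            if rng.random() < p_stroke + p_rebleed:
                return qalys  # simplification: any event is absorbing
            qalys += base_utility / 365.25
        for year in range(horizon_years):  # chronic period: annual cycles
            qalys += base_utility / (1 + DISCOUNT_RATE) ** year
        return qalys

    expected = simulate_patient(resume_day=41, p_stroke_daily=1e-4,
                                p_rebleed_day0=0.02, decay=-0.156)

In the full analysis, the same population would be run once for each candidate resumption day and the discounted QALYs averaged across patients.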
Synthetic Population
To generate a population reflective of the comorbidities and age distribution of the US population with NVAF, we merged relevant variables from the National Health and Nutrition Examination Survey (NHANES; 2011-2012), using multiple imputation to correct for missing variables.10 We then bootstrapped to national population estimates by age and sex to arrive at a hypothetical population of the United States.11 Because NHANES does not include atrial fibrillation, we applied sex- and age-specific prevalence rates from the AnTicoagulation and Risk Factors In Atrial Fibrillation study.12 We then calculated commonly used risk scores (CHA2DS2-Vasc and HAS-BLED) for each patient and limited the population to patients with a CHA2DS2-Vasc score of one or greater.13,14 The population resuming apixaban was further limited to patients whose creatinine clearance was 25 mL/min or greater in keeping with the entry criteria in the phase 3 clinical trial on which the medication’s approval was based.15
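As a hedged sketch of this population-construction step, the fragment below bootstraps survey-style records to census counts and then assigns NVAF by age- and sex-specific prevalence. The records, stratum counts, and prevalence rates are invented stand-ins for the NHANES, census, and ATRIA inputs.

    import random

    rng = random.Random(1)
    survey_rows = [{"age": 72, "sex": "F", "hypertension": 1},
                   {"age": 65, "sex": "M", "hypertension": 0}]  # NHANES-like records (toy)
    census_count = {("F", 72): 3, ("M", 65): 2}                 # persons per stratum (toy)
    af_prevalence = {("F", 72): 0.07, ("M", 65): 0.05}          # ATRIA-style rates (toy)

    population = []
    for row in survey_rows:
        key = (row["sex"], row["age"])
        for _ in range(census_count[key]):         # bootstrap to census totals
            if rng.random() < af_prevalence[key]:  # assign NVAF by stratum rate
                population.append(dict(row))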
To estimate patient-specific probability of rebleeding, we generated a Rockall score for each patient.16 Although the Rockall score, like all other tools used to predict rebleeding following UGIB, has limited discrimination for individual patients, it has demonstrated reasonable calibration across a threefold risk gradient.17-19 International consensus guidelines recommend the Rockall score as one of two risk prediction tools for clinical use in the management of patients with UGIB.20 In addition, because the Rockall score includes some demographic components (five of a possible 11 points), our estimates of rebleeding risk are covariant with other patient-specific risks. We assumed that the endoscopic components of the Rockall score were present in our cohort at the same frequency as in the original derivation cohort and independent of known patient risk factors.16 For example, 441 of 4,025 patients in the original Rockall derivation cohort presented with a systolic blood pressure less than 100 mm Hg; we therefore assumed that an independent and random 10.96% of the cohort would present with shock, which confers two points in the Rockall score.
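This scoring assumption can be made concrete with a sketch in which the demographic components come from the simulated patient while the presentation and endoscopic components are drawn independently at derivation-cohort frequencies. Only the shock frequency (441/4,025) comes from the text above; the other frequencies and the simplified point assignments are placeholders.

    import random

    rng = random.Random(2)

    def toy_rockall(age, comorbidity_points):
        points = 0 if age < 60 else (1 if age < 80 else 2)  # age component
        points += comorbidity_points                        # from the patient record
        if rng.random() < 441 / 4025:  # shock on presentation (10.96%): 2 points
            points += 2
        if rng.random() < 0.30:        # placeholder: high-risk endoscopic diagnosis
            points += 1
        if rng.random() < 0.25:        # placeholder: stigmata of recent hemorrhage
            points += 2
        return points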
The population was replicated 60 times, with identical copies of the population resuming anticoagulation on each of days 1-60 (where day zero represents hemostasis). Intermediate data regarding our simulated population can be found in the Supplemental Appendix and in prior work.
Event Type, Severity, and Mortality
Each patient in our simulation could sustain several discrete and independent events: ischemic stroke, intracranial hemorrhage, recurrent UGIB, or extracranial major hemorrhage other than recurrent UGIB. As in prior analyses using the PADMA model, we did not consider minor hemorrhagic events.8
The probability of each event was conditional on the corresponding risk scoring system. Patient-specific probability of ischemic stroke was conditional on CHA2DS2-Vasc score.21,22 Patient-specific probability of intracranial hemorrhage was conditional on HAS-BLED score, with the proportions of intracranial hemorrhage of each considered subtype (intracerebral, subarachnoid, or subdural) bootstrapped from previously published data.21-24 Patient-specific probability of rebleeding was conditional on Rockall score from the combined Rockall and Vreeburg validation cohorts.17 Patient-specific probability of extracranial major hemorrhage was conditional on HAS-BLED score.21 To avoid double-counting of UGIB, we subtracted the baseline risk of UGIB from the overall rate of extracranial major hemorrhages using previously published data on relative frequency and a bootstrapping approach.25
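The double-counting correction amounts to removing the UGIB share from the overall rate of extracranial major hemorrhage, because recurrent UGIB is modeled separately through the Rockall score. Both numbers in the sketch below are placeholders rather than the published estimates.

    p_ecmh_total = 0.030  # annual probability of extracranial major hemorrhage (placeholder)
    ugib_share = 0.45     # fraction of those events that are upper GI bleeds (placeholder)
    p_ecmh_other = p_ecmh_total * (1 - ugib_share)  # rate applied to non-UGIB hemorrhages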
Probability of Rebleeding Over Time
To estimate the decrease in rebleeding risk over time, we searched the Medline database for systematic reviews of recurrent bleeding following UGIB using the strategy detailed in the Supplemental Appendix. Using the interval rates of rebleeding we identified, we calculated implied daily rates of rebleeding at the midpoint of each interval. For example, 39.5% of rebleeding events occurred within three days of hemostasis, implying a daily rate of approximately 13.2% on day two (32 of 81 events over a three-day period). We repeated this process to estimate daily rates at the midpoint of each reported time interval and fitted an exponential decay function.26 The fitted exponential matched these data points well, but we lacked sufficient data to test other survival functions (eg, Gompertz or lognormal). Our fitted exponential can be expressed as:
P(rebleeding) = b0 × exp(b1 × day)
where b0 = 0.1843 (SE: 0.0136) and b1 = –0.1563 (SE: 0.0188). For example, a mean of 3.9% of rebleeding episodes will occur on day 10 (0.1843 × exp[–0.1563 × 10]).
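The fitted function can be checked directly against the worked example; the snippet below uses only the coefficients reported above.

    import math

    B0, B1 = 0.1843, -0.1563  # fitted coefficients (SE: 0.0136 and 0.0188)

    def share_of_rebleeds(day):
        # Fraction of all rebleeding events expected on a given day after hemostasis.
        return B0 * math.exp(B1 * day)

    assert abs(share_of_rebleeds(10) - 0.039) < 0.001  # ~3.9% on day 10, as above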
Relative Risks of Events with Anticoagulation
For patients resuming warfarin, the probabilities of each event were adjusted based on patient-specific daily INR. All INRs were assumed to be 1.0 until the day of warfarin reinitiation, after which interpolated trajectories of postinitiation INR measurements were sampled for each patient from an earlier study of clinical warfarin initiation.27 Relative risks of ischemic stroke and hemorrhagic events were calculated based on each day’s INR.
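One way to operationalize interpolated daily INRs is linear interpolation between sampled measurement days, with INR fixed at 1.0 before reinitiation. The trajectory below is a toy example, and the mapping from each day's INR to relative risk (taken from the cited sources) is omitted.

    def daily_inr(measurements, day):
        # measurements: sorted (day, INR) pairs beginning at reinitiation;
        # INR is assumed to be 1.0 before the first measurement.
        if day <= measurements[0][0]:
            return 1.0
        for (d0, v0), (d1, v1) in zip(measurements, measurements[1:]):
            if d0 <= day <= d1:
                return v0 + (v1 - v0) * (day - d0) / (d1 - d0)
        return measurements[-1][1]

    trajectory = [(0, 1.0), (3, 1.4), (7, 2.1), (14, 2.6)]  # toy sampled trajectory
    inr_on_day_5 = daily_inr(trajectory, 5)                 # 1.75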
For patients taking apixaban, we assumed that the medication would reach full therapeutic effect one day after reinitiation. Based on available evidence, we applied the relative risks of each event with apixaban compared with warfarin.25
Future Disability and Mortality
Each event in our simulation resulted in hospitalization. Length of stay was sampled for each diagnosis.28 The disutility of hospitalization was estimated based on length of stay.8 Inpatient mortality and future disability were predicted for each event as previously described.8 We assumed that recurrent episodes of UGIB conferred morbidity and mortality identical to extracranial major hemorrhages more broadly.29,30
Disutilities
We used a multiplicative model for disutility with baseline utilities conditional on age and sex.31 Each day after resumption of anticoagulation carried a disutility of 0.012 for warfarin or 0.002 for apixaban, which we assumed to be equivalent to aspirin in disutility.32 Long-term disutility and life expectancy were conditional on modified Rankin Scale (mRS) score.33,34 We discounted all QALYs to day zero using standard exponential discounting and a discount rate centered at 3%. We then computed the average discounted QALYs among the cohort of patients that resumed anticoagulation on each day following the index UGIB.
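A worked example of these conventions, under one plausible reading of the multiplicative model, is shown below; the baseline utility is a placeholder rather than a published age- and sex-specific norm.

    BASE_UTILITY = 0.85  # age/sex utility norm (placeholder)
    DISUTILITY = {"warfarin": 0.012, "apixaban": 0.002}  # per day on therapy
    DISCOUNT_RATE = 0.03

    day_utility = BASE_UTILITY * (1 - DISUTILITY["warfarin"])  # multiplicative model
    qaly_contribution_year5 = day_utility / 365.25 / (1 + DISCOUNT_RATE) ** 5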
Sensitivity Analyses and Metamodel
To assess sensitivity to continuously varying input parameters, such as discount rate, the proportion of extracranial major hemorrhages that are upper GI bleeds, and inpatient mortality from extracranial major hemorrhage, we constructed a metamodel (a regression model of our microsimulation results).35 We tested for interactions among input parameters and dropped parameters that were not statistically significant predictors of discounted QALYs from our metamodel. We then tested for interactions between each parameter and day resuming anticoagulation to determine which factors may impact the optimal day of reinitiation. Finally, we used predicted marginal effects from our metamodel to assess the change in optimal day across the ranges of each input parameter when other parameters were held at their medians.
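The metamodel step can be sketched as an ordinary least-squares regression of simulated QALYs on input parameters plus day-of-resumption interaction terms. The rows below are invented outputs intended only to show the mechanics; the actual metamodel covered far more runs and parameters.

    import numpy as np

    # Columns: intercept, resume_day, chads_vasc, resume_day * chads_vasc.
    X = np.array([[1, 10, 2,  20], [1, 30, 2,  60], [1, 50, 2, 100],
                  [1, 10, 5,  50], [1, 30, 5, 150], [1, 50, 5, 250]], dtype=float)
    y = np.array([6.90, 7.05, 7.02, 5.20, 5.28, 5.15])  # toy simulated QALYs
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    # A material interaction coefficient (beta[3]) indicates that stroke risk
    # shifts the optimal day of reinitiation.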
RESULTS
Resuming warfarin on day zero produced the fewest QALYs. With delay in reinitiation of anticoagulation, expected QALYs increased, peaked, and then declined for all scenarios. In our base-case simulation of warfarin, peak utility was achieved by resumption 41 days after the index UGIB. Resumption between days 32 and 51 produced greater than 99.9% of peak utility. In our base-case simulation of apixaban, peak utility was achieved by resumption 32 days after the index UGIB. Resumption between days 21 and 47 produced greater than 99.9% of peak utility. Results for warfarin and apixaban are shown in Figures 1 and 2, respectively.
The optimal day of warfarin reinitiation was most sensitive to CHA2DS2-Vasc scores and varied by around 11 days between a CHA2DS2-Vasc score of one and a CHA2DS2-Vasc score of six (the 5th and 95th percentiles, respectively) when all other parameters were held at their medians. Results were comparatively insensitive to rebleeding risk. Varying Rockall score from two to seven (the 5th and 95th percentiles, respectively) added three days to the optimal warfarin resumption. Varying other parameters from the 5th to the 95th percentile (including HAS-BLED score, sex, age, and discount rate) changed expected QALYs but did not change the optimal day of reinitiation of warfarin. Optimal day of reinitiation for warfarin stratified by CHA2DS2-Vasc score is shown in Table 2.
Sensitivity analyses for apixaban produced broadly similar results, but with greater sensitivity to rebleeding risk. Optimal day of reinitiation varied by 15 days over the examined range of CHA2DS2-Vasc scores (Table 2) and by six days over the range of Rockall scores (Supplemental Appendix). Other input parameters, including HAS-BLED score, age, sex, and discount rate, changed expected QALYs and were significant in our metamodel but did not affect the optimal day of reinitiation. Metamodel results for both warfarin and apixaban are included in the Supplemental Appendix.
DISCUSSION
Anticoagulation is frequently prescribed for patients with NVAF, and hemorrhagic complications are common. Although anticoagulants are withheld following hemorrhages, scant evidence is available to inform the optimal timing of reinitiation. In this microsimulation analysis, we found that the optimal time to reinitiate anticoagulation following UGIB is around 41 days for warfarin and around 32 days for apixaban. We further demonstrated that the optimal timing of reinitiation can vary by nearly two weeks depending on a patient's underlying risk of stroke, and that early reinitiation is more sensitive to rebleeding risk than late reinitiation.
Prior work has shown that early reinitiation of anticoagulation leads to higher rates of recurrent hemorrhage, while failure to reinitiate anticoagulation is associated with higher rates of stroke and mortality.1-4,36 Our results add to the literature in several important ways. First, our model not only confirms that anticoagulation should be restarted but also suggests when this action should be taken. The competing risks of bleeding and stroke have left clinicians with little guidance; we have quantified the clinical reasoning required for the decision to resume anticoagulation. Second, by including the disutility of hospitalization and long-term disability, our model more accurately represents the complex tradeoffs between recurrent hemorrhage and (potentially disabling) stroke than would a comparison of event rates. Third, our model is conditional upon patient risk factors, allowing clinicians to personalize the timing of anticoagulation resumption. Theory suggests that patients at higher risk of ischemic stroke benefit from earlier resumption of anticoagulation, while patients at higher risk of hemorrhage benefit from delayed reinitiation; we have quantified the extent to which patient-specific risks should change timing. Fourth, we offer a means of improving expected health outcomes that requires little more than appropriate scheduling. Current practice regarding resumption of anticoagulation is highly variable: many patients never resume warfarin, and those who do resume do so after widely varying intervals.1-5,36 We offer a means of standardizing clinical practice and improving expected patient outcomes.
Interestingly, patient-specific risk of rebleeding had little effect on our primary outcome for warfarin and a greater effect in our simulation of apixaban. Rebleeding risk, which decreases roughly exponentially, appears to be sufficiently low by the time warfarin should be resumed that patient-specific hemorrhage risk factors have little impact. Meanwhile, at the shorter post-event intervals at which apixaban can be resumed, both stroke risk and patient-specific bleeding risk remain worthy considerations.
Our model is subject to several important limitations. First, our predictions of the optimal day as a function of risk scores can only be as well-calibrated as the input scoring systems. It is intuitive that patients with higher risk of rebleeding benefit from delayed reinitiation, while patients with higher risk of thromboembolic stroke benefit from earlier reinitiation. Still, clinicians seeking to operationalize competing risks through these two scores—or, indeed, any score—should be mindful of their limited calibration and shared variance. In other words, while the optimal day of reinitiation is likely in the range we have predicted and varies to the degree demonstrated here, the optimal day we have predicted for each score is likely overly precise. However, while better-calibrated prediction models would improve the accuracy of our model, we believe ours to be the best estimate of timing given available data and this approach to be the most appropriate way to personalize anticoagulation resumption.
Our simulation of apixaban carries an additional source of potential miscalibration. In the clinical trials that led to their approval, apixaban and other direct oral anticoagulants (DOACs) were compared with warfarin over longer periods of time than the acute period simulated in this work. Over a short period of time, patients treated with more rapidly therapeutic medications (in this case, apixaban) would receive more days of effective therapy compared with a slower-onset medication, such as warfarin. Therefore, the relative risks experienced by patients are likely different over the time period we have simulated compared with those measured over longer periods of time (as in phase 3 clinical trials). Our results for apixaban should be viewed as more limited than our estimates for warfarin. More broadly, simulation analyses are intended to predict overall outcomes that are difficult to measure. While other frameworks to assess model credibility exist, the fact remains that no extant datasets can directly validate our predictions.37
Our findings are limited to patients with NVAF. Anticoagulants are prescribed for a variety of indications with widely varied underlying risks and benefits. Models constructed for these conditions would likely produce different timing for resumption of anticoagulation. Unfortunately, large-scale cohort studies to inform such models are lacking. Similarly, we simulated UGIB, and our results should not be generalized to populations with other types of bleeding (eg, intracranial hemorrhage). Again, cohort studies of other types of bleeding would be necessary to understand the risks of anticoagulation over time in such populations.
Higher-quality data regarding the risk of rebleeding over time would improve our estimates. Our literature search identified only one systematic review that could be used to estimate the risk of recurrent UGIB over time. These data are not adequate to test other forms this survival curve could take, such as Gompertz or Weibull distributions. Recurrence risk almost certainly declines over time, but how quickly it declines carries additional uncertainty.
Despite these limitations, we believe our results to be the best estimates to date of the optimal time of anticoagulation reinitiation following UGIB. Our findings could help inform clinical practice guidelines and reduce variation in care where current practice guidelines are largely silent. Given the potential ease of implementing scheduling changes, our results represent an opportunity to improve patient outcomes with little resource investment.
In conclusion, after UGIB associated with anticoagulation, our model suggests that warfarin is optimally restarted approximately six weeks following hemostasis and that apixaban is optimally restarted approximately one month following hemostasis. Modest changes to this timing based on probability of thromboembolic stroke are reasonable.
Disclosures
The authors have nothing to disclose.
Funding
The authors received no specific funding for this work.
1. Witt DM, Delate T, Garcia DA, et al. Risk of thromboembolism, recurrent hemorrhage, and death after warfarin therapy interruption for gastrointestinal tract bleeding. Arch Intern Med. 2012;172(19):1484-1491. doi: 10.1001/archinternmed.2012.4261. PubMed
2. Sengupta N, Feuerstein JD, Patwardhan VR, et al. The risks of thromboembolism vs recurrent gastrointestinal bleeding after interruption of systemic anticoagulation in hospitalized inpatients with gastrointestinal bleeding: a prospective study. Am J Gastroenterol. 2015;110(2):328-335. doi: 10.1038/ajg.2014.398. PubMed
3. Qureshi W, Mittal C, Patsias I, et al. Restarting anticoagulation and outcomes after major gastrointestinal bleeding in atrial fibrillation. Am J Cardiol. 2014;113(4):662-668. doi: 10.1016/j.amjcard.2013.10.044. PubMed
4. Milling TJ, Spyropoulos AC. Re-initiation of dabigatran and direct factor Xa antagonists after a major bleed. Am J Emerg Med. 2016;34(11):19-25. doi: 10.1016/j.ajem.2016.09.049. PubMed
5. Brotman DJ, Jaffer AK. Resuming anticoagulation in the first week following gastrointestinal tract hemorrhage. Arch Intern Med. 2012;172(19):1492-1493. doi: 10.1001/archinternmed.2012.4309. PubMed
6. Barnes GD, Lucas E, Alexander GC, Goldberger ZD. National trends in ambulatory oral anticoagulant use. Am J Med. 2015;128(12):1300-1305. doi: 10.1016/j.amjmed.2015.05.044. PubMed
7. Noseworthy PA, Yao X, Abraham NS, Sangaralingham LR, McBane RD, Shah ND. Direct comparison of dabigatran, rivaroxaban, and apixaban for effectiveness and safety in nonvalvular atrial fibrillation. Chest. 2016;150(6):1302-1312. doi: 10.1016/j.chest.2016.07.013. PubMed
8. Pappas MA, Barnes GD, Vijan S. Personalizing bridging anticoagulation in patients with nonvalvular atrial fibrillation—a microsimulation analysis. J Gen Intern Med. 2017;32(4):464-470. doi: 10.1007/s11606-016-3932-7. PubMed
9. Pappas MA, Vijan S, Rothberg MB, Singer DE. Reducing age bias in decision analyses of anticoagulation for patients with nonvalvular atrial fibrillation – a microsimulation study. PLoS One. 2018;13(7):e0199593. doi: 10.1371/journal.pone.0199593. PubMed
10. National Center for Health Statistics. National Health and Nutrition Examination Survey. https://www.cdc.gov/nchs/nhanes/about_nhanes.htm. Accessed August 30, 2018.
11. United States Census Bureau. Age and sex composition in the United States: 2014. https://www.census.gov/data/tables/2014/demo/age-and-sex/2014-age-sex-composition.html. Accessed August 30, 2018.
12. Go AS, Hylek EM, Phillips KA, et al. Prevalence of diagnosed atrial fibrillation in adults: national implications for rhythm management and stroke prevention: the AnTicoagulation and Risk Factors in Atrial Fibrillation (ATRIA) study. JAMA. 2001;285(18):2370-2375. doi: 10.1001/jama.285.18.2370. PubMed
13. Lip GYH, Nieuwlaat R, Pisters R, Lane DA, Crijns HJGM. Refining clinical risk stratification for predicting stroke and thromboembolism in atrial fibrillation using a novel risk factor-based approach: the euro heart survey on atrial fibrillation. Chest. 2010;137(2):263-272. doi: 10.1378/chest.09-1584. PubMed
14. Pisters R, Lane DA, Nieuwlaat R, de Vos CB, Crijns HJGM, Lip GYH. A novel user-friendly score (HAS-BLED) to assess 1-year risk of major bleeding in patients with atrial fibrillation. Chest. 2010;138(5):1093-1100. doi: 10.1378/chest.10-0134. PubMed
15. Granger CB, Alexander JH, McMurray JJV, et al. Apixaban versus warfarin in patients with atrial fibrillation. N Engl J Med. 2011;365(11):981-992. doi: 10.1056/NEJMoa1107039. PubMed
16. Rockall TA, Logan RF, Devlin HB, Northfield TC. Risk assessment after acute upper gastrointestinal haemorrhage. Gut. 1996;38(3):316-321. doi: 10.1136/gut.38.3.316. PubMed
17. Vreeburg EM, Terwee CB, Snel P, et al. Validation of the Rockall risk scoring system in upper gastrointestinal bleeding. Gut. 1999;44(3):331-335. doi: 10.1136/gut.44.3.331. PubMed
18. Enns RA, Gagnon YM, Barkun AN, et al. Validation of the Rockall scoring system for outcomes from non-variceal upper gastrointestinal bleeding in a Canadian setting. World J Gastroenterol. 2006;12(48):7779-7785. doi: 10.3748/wjg.v12.i48.7779. PubMed
19. Stanley AJ, Laine L, Dalton HR, et al. Comparison of risk scoring systems for patients presenting with upper gastrointestinal bleeding: international multicentre prospective study. BMJ. 2017;356:i6432. doi: 10.1136/bmj.i6432. PubMed
20. Barkun AN, Bardou M, Kuipers EJ, et al. International consensus recommendations on the management of patients with nonvariceal upper gastrointestinal bleeding. Ann Intern Med. 2010;152(2):101-113. doi: 10.7326/0003-4819-152-2-201001190-00009. PubMed
21. Friberg L, Rosenqvist M, Lip GYH. Evaluation of risk stratification schemes for ischaemic stroke and bleeding in 182 678 patients with atrial fibrillation: the Swedish atrial fibrillation cohort study. Eur Heart J. 2012;33(12):1500-1510. doi: 10.1093/eurheartj/ehr488. PubMed
22. Friberg L, Rosenqvist M, Lip GYH. Net clinical benefit of warfarin in patients with atrial fibrillation: a report from the Swedish atrial fibrillation cohort study. Circulation. 2012;125(19):2298-2307. doi: 10.1161/CIRCULATIONAHA.111.055079. PubMed
23. Hart RG, Diener HC, Yang S, et al. Intracranial hemorrhage in atrial fibrillation patients during anticoagulation with warfarin or dabigatran: the RE-LY trial. Stroke. 2012;43(6):1511-1517. doi: 10.1161/STROKEAHA.112.650614. PubMed
24. Hankey GJ, Stevens SR, Piccini JP, et al. Intracranial hemorrhage among patients with atrial fibrillation anticoagulated with warfarin or rivaroxaban: the rivaroxaban once daily, oral, direct factor Xa inhibition compared with vitamin K antagonism for prevention of stroke and embolism trial in atrial fibrillation. Stroke. 2014;45(5):1304-1312. doi: 10.1161/STROKEAHA.113.004506. PubMed
25. Eikelboom JW, Wallentin L, Connolly SJ, et al. Risk of bleeding with 2 doses of dabigatran compared with warfarin in older and younger patients with atrial fibrillation: an analysis of the randomized evaluation of long-term anticoagulant therapy (RE-LY trial). Circulation. 2011;123(21):2363-2372. doi: 10.1161/CIRCULATIONAHA.110.004747. PubMed
26. El Ouali S, Barkun A, Martel M, Maggio D. Timing of rebleeding in high-risk peptic ulcer bleeding after successful hemostasis: a systematic review. Can J Gastroenterol Hepatol. 2014;28(10):543-548. doi: 10.1016/S0016-5085(14)60738-1. PubMed
27. Kimmel SE, French B, Kasner SE, et al. A pharmacogenetic versus a clinical algorithm for warfarin dosing. N Engl J Med. 2013;369(24):2283-2293. doi: 10.1056/NEJMoa1310669. PubMed
28. Healthcare Cost and Utilization Project (HCUP), Agency for Healthcare Research and Quality. HCUP Databases. https://www.hcup-us.ahrq.gov/nisoverview.jsp. Accessed August 31, 2018.
29. Guerrouij M, Uppal CS, Alklabi A, Douketis JD. The clinical impact of bleeding during oral anticoagulant therapy: assessment of morbidity, mortality and post-bleed anticoagulant management. J Thromb Thrombolysis. 2011;31(4):419-423. doi: 10.1007/s11239-010-0536-7. PubMed
30. Fang MC, Go AS, Chang Y, et al. Death and disability from warfarin-associated intracranial and extracranial hemorrhages. Am J Med. 2007;120(8):700-705. doi: 10.1016/j.amjmed.2006.07.034. PubMed
31. Guertin JR, Feeny D, Tarride JE. Age- and sex-specific Canadian utility norms, based on the 2013-2014 Canadian Community Health Survey. CMAJ. 2018;190(6):E155-E161. doi: 10.1503/cmaj.170317. PubMed
32. Gage BF, Cardinalli AB, Albers GW, Owens DK. Cost-effectiveness of warfarin and aspirin for prophylaxis of stroke in patients with nonvalvular atrial fibrillation. JAMA. 1995;274(23):1839-1845. doi: 10.1001/jama.1995.03530230025025. PubMed
33. Fang MC, Go AS, Chang Y, et al. Long-term survival after ischemic stroke in patients with atrial fibrillation. Neurology. 2014;82(12):1033-1037. doi: 10.1212/WNL.0000000000000248. PubMed
34. Hong KS, Saver JL. Quantifying the value of stroke disability outcomes: WHO global burden of disease project disability weights for each level of the modified Rankin scale. Stroke. 2009;40(12):3828-3833. doi: 10.1161/STROKEAHA.109.561365. PubMed
35. Jalal H, Dowd B, Sainfort F, Kuntz KM. Linear regression metamodeling as a tool to summarize and present simulation model results. Med Decis Making. 2013;33(7):880-890. doi: 10.1177/0272989X13492014. PubMed
36. Staerk L, Lip GYH, Olesen JB, et al. Stroke and recurrent haemorrhage associated with antithrombotic treatment after gastrointestinal bleeding in patients with atrial fibrillation: nationwide cohort study. BMJ. 2015;351:h5876. doi: 10.1136/bmj.h5876. PubMed
37. Kopec JA, Finès P, Manuel DG, et al. Validation of population-based disease simulation models: a review of concepts and methods. BMC Public Health. 2010;10(1):710. doi: 10.1186/1471-2458-10-710. PubMed
38. Smith EE, Shobha N, Dai D, et al. Risk score for in-hospital ischemic stroke mortality derived and validated within the Get With The Guidelines-Stroke Program. Circulation. 2010;122(15):1496-1504. doi: 10.1161/CIRCULATIONAHA.109.932822. PubMed
39. Smith EE, Shobha N, Dai D, et al. A risk score for in-hospital death in patients admitted with ischemic or hemorrhagic stroke. J Am Heart Assoc. 2013;2(1):e005207. doi: 10.1161/JAHA.112.005207. PubMed
40. Busl KM, Prabhakaran S. Predictors of mortality in nontraumatic subdural hematoma. J Neurosurg. 2013;119(5):1296-1301. doi: 10.3171/2013.4.JNS122236. PubMed
41. Murphy SL, Kochanek KD, Xu J, Heron M. Deaths: final data for 2012. Natl Vital Stat Rep. 2015;63(9):1-117. http://www.ncbi.nlm.nih.gov/pubmed/26759855. Accessed August 31, 2018.
42. Dachs RJ, Burton JH, Joslin J. A user’s guide to the NINDS rt-PA stroke trial database. PLOS Med. 2008;5(5):e113. doi: 10.1371/journal.pmed.0050113. PubMed
43. Ashburner JM, Go AS, Reynolds K, et al. Comparison of frequency and outcome of major gastrointestinal hemorrhage in patients with atrial fibrillation on versus not receiving warfarin therapy (from the ATRIA and ATRIA-CVRN cohorts). Am J Cardiol. 2015;115(1):40-46. doi: 10.1016/j.amjcard.2014.10.006. PubMed
44. Weinstein MC, Siegel JE, Gold MR, Kamlet MS, Russell LB. Recommendations of the panel on cost-effectiveness in health and medicine. JAMA. 1996;276(15):1253-1258. doi: 10.1001/jama.1996.03540150055031. PubMed
35. Jalal H, Dowd B, Sainfort F, Kuntz KM. Linear regression metamodeling as a tool to summarize and present simulation model results. Med Decis Mak. 2013;33(7):880-890. doi: 10.1177/0272989X13492014. PubMed
36. Staerk L, Lip GYH, Olesen JB, et al. Stroke and recurrent haemorrhage associated with antithrombotic treatment after gastrointestinal bleeding in patients with atrial fibrillation: nationwide cohort study. BMJ. 2015;351:h5876. doi: 10.1136/bmj.h5876. PubMed
37. Kopec JA, Finès P, Manuel DG, et al. Validation of population-based disease simulation models: a review of concepts and methods. BMC Public Health. 2010;10(1):710. doi: 10.1186/1471-2458-10-710. PubMed
38. Smith EE, Shobha N, Dai D, et al. Risk score for in-hospital ischemic stroke mortality derived and validated within the Get With The Guidelines-Stroke Program. Circulation. 2010;122(15):1496-1504. doi: 10.1161/CIRCULATIONAHA.109.932822. PubMed
39. Smith EE, Shobha N, Dai D, et al. A risk score for in-hospital death in patients admitted with ischemic or hemorrhagic stroke. J Am Heart Assoc. 2013;2(1):e005207. doi: 10.1161/JAHA.112.005207. PubMed
40. Busl KM, Prabhakaran S. Predictors of mortality in nontraumatic subdural hematoma. J Neurosurg. 2013;119(5):1296-1301. doi: 10.3171/2013.4.JNS122236. PubMed
41. Murphy SL, Kochanek KD, Xu J, Heron M. Deaths: final data for 2012. Natl Vital Stat Rep. 2015;63(9):1-117. http://www.ncbi.nlm.nih.gov/pubmed/26759855. Accessed August 31, 2018.
42. Dachs RJ, Burton JH, Joslin J. A user’s guide to the NINDS rt-PA stroke trial database. PLOS Med. 2008;5(5):e113. doi: 10.1371/journal.pmed.0050113. PubMed
43. Ashburner JM, Go AS, Reynolds K, et al. Comparison of frequency and outcome of major gastrointestinal hemorrhage in patients with atrial fibrillation on versus not receiving warfarin therapy (from the ATRIA and ATRIA-CVRN cohorts). Am J Cardiol. 2015;115(1):40-46. doi: 10.1016/j.amjcard.2014.10.006. PubMed
44. Weinstein MC, Siegel JE, Gold MR, Kamlet MS, Russell LB. Recommendations of the panel on cost-effectiveness in health and medicine. JAMA. 1996;276(15):1253-1258. doi: 10.1001/jama.1996.03540150055031. PubMed