Evaluating Pharmacists’ Time Collecting Self-Monitoring Blood Glucose Data
The American Diabetes Association recommends that patients on intensive insulin regimens self-monitor blood glucose (SMBG) to assist in therapy optimization.1 To be useful, SMBG data must be captured by patients, shared with care teams, and used and interpreted by patients and practitioners.2,3 Communication of SMBG data from the patient to practitioner can be challenging. Although technology can help in this process, limitations exist, such as manual data entry into systems, patient and/or practitioner technological challenges (eg, accessing interface), and compatibility and integration between SMBG devices and electronic health record (EHR) systems.4
The Boise Veterans Affairs Medical Center (BVAMC) in Idaho serves more than 100,000 veterans. It includes a main site, community-based outpatient clinics, and a clinical resource hub that provides telehealth services to veterans residing in rural neighboring states. The BVAMC pharmacy department provides both inpatient and outpatient services. At the BVAMC, clinical pharmacist practitioners (CPPs) are independent practitioners who support their care teams in comprehensive medication management and have the ability to initiate, modify, and discontinue drug therapy for referred patients.5 A prominent role of CPPs in primary care teams is to manage patients with uncontrolled diabetes and intensive insulin regimens in which SMBG data are vital to therapy optimization. As collecting SMBG data from patients is seen anecdotally as time intensive, we determined the mean time spent by CPPs collecting patient SMBG data and its potential implications.
Methods
Pharmacists at BVAMC were asked to estimate and record the following: SMBG data collection method, time spent collecting data, extra time spent documenting or formatting SMBG readings, total patient visit time, and visit type. Time was collected in minutes. Extra time spent documenting or formatting SMBG readings included any additional time formatting or entering data in the clinical note after talking to the patient; if this was done while multitasking and talking to the patient, it was not considered extra time. For total patient visit time, pharmacists were asked to estimate only time spent discussing diabetes care and collecting SMBG data. Visit types were categorized as in-person/face-to-face, telephone, and telehealth using clinical video telehealth (CVT)/VA Video Connect (VVC). Data were collected using a standardized spreadsheet. The spreadsheet was pilot tested by a CPP before distribution to all pharmacists.
CPPs were educated about the project in March 2021 and were asked to record data for a 1-week period between April 5, 2021, and April 30, 2021. One CPP also provided delayed data collected from May 17 to 21, 2021, and these data were included in our analysis.
Descriptive statistics were used to determine the mean time spent by CPPs collecting SMBG data. Unpaired t tests were used to compare time spent collecting SMBG data by different collection methods and patient visit types. A P value of ≤ .05 was considered statistically significant. Data were organized in Microsoft Excel, and statistics were completed with JMP Pro v15.
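The unpaired t-test comparison described above can be sketched as follows. The per-encounter times here are invented for illustration only (the study's actual analysis was run in JMP Pro v15 on the recorded spreadsheet data), and the group means are chosen to match the reported 3.7- vs 2.8-minute comparison; with real data the conclusion may differ.

```python
# Illustrative unpaired (pooled-variance) t test, as used in the study's methods.
# All values below are hypothetical; only the group means mirror the article.
from statistics import mean, variance
from math import sqrt

def unpaired_t(a, b):
    """Student's two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

patient_report = [5.5, 2.0, 4.8, 2.2, 4.0]  # hypothetical minutes, mean 3.7
other_methods  = [1.5, 4.2, 2.1, 3.5, 2.7]  # hypothetical minutes, mean 2.8

t = unpaired_t(patient_report, other_methods)
T_CRIT = 2.306  # two-tailed critical value for alpha = .05 with df = 8
print(round(t, 2), abs(t) > T_CRIT)
```

With these made-up values the difference in means (0.9 minutes) is not statistically significant, consistent in direction with the article's reported P = .07 comparison.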
Results
Eight CPPs provided data from 120 patient encounters. Across all encounters, CPPs spent a mean of about 3 minutes collecting SMBG data and about 1 additional minute documenting and formatting it, for a mean total of 4.6 minutes.
When compared by SMBG collection method, the longest collection time was with patient report (3.7 minutes), and the longest documenting/formatting time was with meter download/home telehealth (2 minutes). There was no statistically significant difference in the time to collect SMBG data between patient report and other methods (3.7 minutes vs 2.8 minutes; P = .07).
When compared by visit type, there was no statistically significant difference in time spent collecting SMBG data between in-person and telephone or video visits (3.8 minutes vs 3.2 minutes; P = .39) (Table 2). The most common SMBG collection method for in-person/face-to-face visits was continuous glucose monitor (CGM) (n = 10), followed by meter download/home telehealth (n = 5), patient report (n = 3), and directly from log/meter (n = 1). For telephone or video visits, the most common collection method was patient report (n = 72), followed by directly from log/meter (n = 18), CGM (n = 5), meter download/home telehealth (n = 4), and secure message (n = 2).
Discussion
We found that the mean amount of time spent collecting and documenting/formatting SMBG data was only 4.6 minutes; however, this still represented a substantial portion of visit time. For telephone and CVT/VVC appointments, this represented > 25% of total visit time. While CPPs make important contributions to interprofessional team management of patients with diabetes, their cost is not trivial.6-8 It is worth exploring the most effective and efficient ways to use CPPs. Our results indicate that streamlining SMBG data collection may be beneficial.
Pharmacy technicians, licensed practical nurses/clinical associates, registered nurses/nurse care managers, or other team members could help improve SMBG data collection. Using other team members is also an opportunity for comanagement, for team collaboration, and for more patients to be seen. For example, if a CPP currently has 12 patient encounters that last 20 minutes each, this results in about 240 minutes of direct patient care. If patient encounters were 16 minutes, CPPs could have 15 patient encounters in 240 minutes. Saved time could be used for other clinical tasks involved in disease management or clinical reminder reviews. While there are benefits to CPPs collecting SMBG data, such as further inquiry about patient-reported values, other team members could be trained to ask appropriate follow-up questions for abnormal blood glucose readings. In addition, leveraging current team members and optimizing their roles could prevent the need to acquire additional full-time equivalent employees.
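The scheduling arithmetic in the example above can be made explicit. This is a minimal sketch of the article's own illustration; the encounter counts and durations are the hypothetical figures from the text, not measured workload data.

```python
# Sketch of the example: trimming ~4 minutes of SMBG collection from each
# 20-minute encounter frees room for more visits in the same care block.
ENCOUNTERS = 12
CURRENT_LEN = 20   # minutes per encounter today
TRIMMED_LEN = 16   # minutes if SMBG collection is delegated

total_minutes = ENCOUNTERS * CURRENT_LEN        # direct patient care block
possible = total_minutes // TRIMMED_LEN         # encounters at the shorter length
print(total_minutes, possible)  # 240 15
```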
Another opportunity to increase efficiency in SMBG data collection is with SMBG devices and EHR integration.4,9 However, integration can be difficult with different types of SMBG devices and EHR platforms. Education for patients and practitioners could help to ensure accurate and reliable data uploads; patient internet availability; data protection, privacy, and sharing; workflow management; and clear patient-practitioner expectations.10 For example, if patient SMBG data are automatically uploaded to practitioners, patients’ expectations for practitioner review of data and follow-up need to be determined.
We found a subset of 23 patient encounters where data collection and documenting/formatting represented more than half of the total visit time. In this subset, 13 SMBG reports were pulled from a log or meter, 8 were patient reported, and 3 were meter download or home telehealth.
Limitations
A potential reason for the lack of statistically significant differences by SMBG collection method or visit type in this study is the small sample size. Participation in this work was voluntary, and all participating CPPs had ≥ 3 years of practice in their current setting, which includes a heavy workload of diabetes management. These pharmacists noted self-established procedures/systems for SMBG data collection, including the use of Excel spreadsheets with pregenerated formulas. For less experienced CPPs, SMBG data collection time may be even longer. Pharmacists also noted that they may limit time spent collecting SMBG data depending on the patient encounter and whether they have gathered sufficient data to guide clinical care. Other limitations of this work include data collection from a single institution and reliance on self-estimated times; there was no external monitor.
Conclusions
In this analysis, we found that CPPs spend about 3 minutes collecting SMBG data from patients and about an additional 1 minute documenting and formatting data. While 4 to 5 minutes may not represent a substantial amount of time for 1 patient, it can be when multiplied by several patient encounters. The time spent collecting SMBG data did not significantly differ by collection method or visit type. Opportunities to increase efficiency in SMBG data collection, such as the use of nonpharmacist team members, are worth exploring.
Acknowledgments
Thank you to the pharmacists at the Boise Veterans Affairs Medical Center for their time and support of this work: Danielle Ahlstrom, Paul Black, Robyn Cruz, Sarah Naidoo, Anthony Nelson, Laura Spoutz, Eileen Twomey, Donovan Victorine, and Michelle Wilkin.
1. American Diabetes Association. 7. Diabetes Technology: Standards of Medical Care in Diabetes-2021. Diabetes Care. 2021;44(suppl 1):S85-S99. doi:10.2337/dc21-S007
2. Austin MM. The two skill sets of self-monitoring of blood glucose education: the operational and the interpretive. Diabetes Spectr. 2013;26(2):83-90. doi:10.2337/diaspect.26.2.83
3. Gallichan M. Self monitoring of glucose by people with diabetes: evidence based practice. BMJ. 1997;314(7085):964-967. doi:10.1136/bmj.314.7085.964
4. Lewinski AA, Drake C, Shaw RJ, et al. Bridging the integration gap between patient-generated blood glucose data and electronic health records. J Am Med Inform Assoc. 2019;26(7):667-672. doi:10.1093/jamia/ocz039
5. McFarland MS, Groppi J, Jorgenson T, et al. Role of the US Veterans Health Administration clinical pharmacy specialist provider: shaping the future of comprehensive medication management. Can J Hosp Pharm. 2020;73(2):152-158. doi:10.4212/cjhp.v73i2.2982
6. Schmidt K, Caudill J, Hamilton T. Impact of clinical pharmacy specialists on glycemic control in veterans with type 2 diabetes. Am J Health Syst Pharm. 2019;76(suppl 1):S9-S14. doi:10.1093/ajhp/zxy015
7. Sullivan J, Jett BP, Cradick M, Zuber J. Effect of clinical pharmacist intervention on hemoglobin A1c reduction in veteran patients with type 2 diabetes in a rural setting. Ann Pharmacother. 2016;50(12):1023-1027. doi:10.1177/1060028016663564
8. Bloom CI, Ku M, Williams M. Clinical pharmacy specialists’ impact in patient aligned care teams for type 2 diabetes management. J Am Pharm Assoc (2003). 2019;59(5):717-721. doi:10.1016/j.japh.2019.05.002
9. Kumar RB, Goren ND, Stark DE, Wall DP, Longhurst CA. Automated integration of continuous glucose monitor data in the electronic health record using consumer technology. J Am Med Inform Assoc. 2016;23(3):532-537. doi:10.1093/jamia/ocv206
10. Reading MJ, Merrill JA. Converging and diverging needs between patients and providers who are collecting and using patient-generated health data: an integrative review. J Am Med Inform Assoc. 2018;25(6):759-771. doi:10.1093/jamia/ocy006
VA Home Telehealth Program for Initiating and Optimizing Heart Failure Guideline-Directed Medical Therapy
Heart failure (HF) is a chronic, progressive condition that is characterized by the heart’s inability to effectively pump blood throughout the body. In 2018, approximately 6.2 million US adults had HF, and 13.4% of all death certificates noted HF as a precipitating factor.1 Patients not receiving appropriate guideline-directed medical therapy (GDMT) face a 29% excess mortality risk over a 2-year period.2 Each additional GDMT included in a patient’s regimen significantly reduces all-cause mortality.3
The Change the Management of Patients with Heart Failure (CHAMP) registry reports that only about 1% of patients with HF are prescribed 3 agents from contemporary GDMT at target doses, highlighting the need for optimizing clinicians’ approaches to GDMT.4 Similarly, the Get With The Guidelines-Heart Failure registry has noted that only 20.2% of patients with HF with reduced ejection fraction (HFrEF) are prescribed a sodium-glucose cotransporter 2 inhibitor (SGLT2i) following hospital discharge for HFrEF exacerbation.5 Overall, treatment rates with GDMT saw limited improvement between 2013 and 2019, with no significant improvement in mortality over that period, indicating the need for optimized methods to encourage the initiation of GDMT.6
Remote monitoring and telecare are novel ways to improve GDMT rates in those with HFrEF. However, data are inconsistent regarding the impact of remote HF monitoring on improvements in GDMT or HF-related outcomes.6-10 The modalities of remote monitoring for GDMT vary among studies, but telehealth monitoring has clear potential to improve GDMT and thereby reduce HF-related hospitalizations.
Telemonitoring has demonstrated improved participant adherence with weight monitoring, although the withdrawal rate was high, and has the potential to reduce all-cause mortality and HF-related hospitalizations.11,12 Telemonitoring for GDMT optimization led to an increase in the proportion of patients who achieved optimal GDMT doses, a decrease in the time to dose optimization, and a reduction in the number of clinic visits.13 Remote GDMT titration was accomplished in the general patient population with HFrEF; however, in populations already followed by cardiologists or HF specialists, remote optimization strategies did not yield different proportions of GDMT use.14 The aim of this study was to assess the impact of the home telehealth (HT) monitoring program on the initiation and optimization of HF GDMT among veterans with HFrEF at the Veterans Affairs Ann Arbor Healthcare System (VAAAHS) in Michigan.
Methods
This was a single-center retrospective study of Computerized Patient Record System (CPRS) data. Patients at the VAAAHS were evaluated if they were diagnosed with HFrEF and were eligible for enrollment in the HT monitoring program. Eligibility criteria included a diagnosis of stage C HF, irrespective of EF, and a history of any HF-related hospitalization. We focused on patients with HFrEF due to stronger guideline-based recommendations for certain pharmacotherapies as compared with HF with mildly reduced ejection fraction (HFmrEF) and preserved ejection fraction (HFpEF). Initial patient data for HT enrollment were accessed using the Heart Failure Dashboard via the US Department of Veterans Affairs (VA) Academic Detailing Service. The target daily doses of typical agents used in HFrEF GDMT are listed in the Appendix.
The HT program is an embedded model in which HT nurses receive remote data from the patient and triage it with the VAAAHS cardiology team. Patients’ questions, concerns, and/or vital signs are collected remotely. In this model, nurses are embedded in the cardiology team, working with the cardiologists, cardiology clinical pharmacist, and/or cardiology nurse practitioners to make medication interventions. The HT device records weight, blood pressure (BP), heart rate, and pulse oximetry. HT nurses are also available to the patient via phone or video. The program uses a 180-day disease management protocol for HF via remote device, enabling the patient to answer questions and receive education on their disease daily. Responses to questions and data are then reviewed by an HT nurse remotely during business hours and triaged as appropriate with the cardiology team. Data can be communicated to the cardiology team via the patient record, eliminating the need for the cardiology team to use the proprietary portal affiliated with the HT device.
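The triage step described above can be sketched as a simple rule check over each day’s device readings. This is an illustrative sketch only; the thresholds and alert labels below are hypothetical examples, not the actual VAAAHS protocol.

```python
# Illustrative sketch of the HT triage step: flag daily remote readings that
# may warrant escalation to the cardiology team. All thresholds here are
# hypothetical, not the actual VAAAHS protocol.

def triage_reading(weight_gain_lb: float, systolic_bp: int,
                   heart_rate: int, spo2: int) -> list:
    """Return a list of alert labels for one day's HT device readings."""
    alerts = []
    if weight_gain_lb >= 3:            # rapid weight gain may signal fluid retention
        alerts.append("weight gain")
    if systolic_bp < 90:               # hypotension may limit GDMT titration
        alerts.append("low blood pressure")
    if heart_rate < 50 or heart_rate > 110:
        alerts.append("heart rate")
    if spo2 < 90:                      # low pulse oximetry reading
        alerts.append("oxygen saturation")
    return alerts

# A reading with rapid weight gain and low BP produces two alerts for the nurse:
print(triage_reading(4.0, 85, 72, 95))  # → ['weight gain', 'low blood pressure']
```

In the program itself, the flagged readings are reviewed by an HT nurse during business hours rather than acted on automatically.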
Study Sample
Patient information was obtained from a list of 417 patients eligible for enrollment in the HT program; the list was sent to the HT program for review and enrollment. Patient data were extracted from the VAAAHS HF Dashboard and included all patients with HFrEF and available data on the platform. The sample for the retrospective chart review included 40 adults who had HFrEF, defined as a left ventricular EF (LVEF) of ≤ 40% as evidenced by a transthoracic echocardiogram or cardiac magnetic resonance imaging. These patients were contacted and agreed to enroll in the HT monitoring program. The HT program population was compared against a control group of 33 patients who were ineligible for the HT program. Patients were deemed ineligible for HT if they resided in a nursing home, lacked a VAAAHS primary care clinician, or declined participation in the HT program.
Procedures
Patients who declined participation in the HT program followed the standard of care, which was limited to visits with primary care clinicians and/or cardiologists as per the follow-up plan. Patient data were collected over 12 months. The study was approved by the VAAAHS Institutional Review Board (reference number, 1703034), Research and Development Committee, and Research Administration.
Primary and Secondary Goals
The primary goal of the study was to assess the impact of the HT program on drug interventions, specifically initiating and titrating HFrEF pharmacotherapies. Interventions were based on GDMT with known mortality- and morbidity-reducing properties when used at their maximum tolerated doses, including angiotensin-converting enzyme inhibitors (ACEi), angiotensin receptor-neprilysin inhibitor (ARNi), or angiotensin receptor blockers (ARB), with a preference for ARNi, β-blockers for HFrEF (metoprolol succinate, bisoprolol, or carvedilol), aldosterone antagonists, and SGLT2is.
Secondary goals included HF-related hospitalizations, medication adherence, time to enrollment in HT, time to laboratory analysis after the initiation or titration of an ACEi/ARB/ARNi or aldosterone antagonist, and time enrolled in the HT program. Patients were considered adherent if their drug refill history showed consistent fills of their medications. The χ2 test was used for total interventions made during the study period, and the Fisher exact test was used for all other comparisons.
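As a rough sketch of how such 2×2 group comparisons are computed, the odds ratio and Pearson χ2 statistic follow standard closed-form formulas. The counts below are illustrative placeholders, not the study data.

```python
# Sketch of a 2x2 comparison like those described above, using the standard
# closed-form formulas. Counts are illustrative placeholders, not study data.

def odds_ratio(a, b, c, d):
    """Odds ratio for the 2x2 table [[a, b], [c, d]]."""
    return (a * d) / (b * c)

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for a 2x2 table."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical example: 10 of 15 intervention patients vs 4 of 12 controls
# reach a target dose.
print(round(odds_ratio(10, 5, 4, 8), 2), round(chi2_2x2(10, 5, 4, 8), 2))  # → 4.0 2.97
```

In practice, `scipy.stats.chi2_contingency` and `scipy.stats.fisher_exact` provide these tests along with P values.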
Results
Patient data were collected between July 2022 and June 2023. All 73 patients were male, and the mean age in the HT group (n = 40) was 72.6 years and 75.2 years for the control group (n = 33). Overall, the baseline demographics were similar between the groups (Table 1). Of 40 patients screened for enrollment in the HT program, 33 were included in the analysis (Figure 1).
At baseline, the HT group included more individuals than the control group on ACEi/ARB/ARNi (24 vs 19, respectively), β-blocker (28 vs 24, respectively), SGLT2i (14 vs 11, respectively), and aldosterone antagonist (15 vs 9, respectively) (Figure 2). There were 20 interventions made in the HT group compared with 11 therapy changes in the control arm during the study (odds ratio, 1.43; P = .23) (Table 2). In the HT group, 1 patient achieved an ACEi target dose, 3 patients achieved a β-blocker target dose, and 7 achieved a target dose of spironolactone (titration is not required for SGLT2i therapy and is counted as target dose). In the HT group, 17 patients were on ≥ 3 recommended agents, while 9 patients were taking 4 agents. Seven of 20 HT group interventions resulted in titration to the target dose. In the control group, no patients achieved an ARNi target dose, 3 patients achieved a β-blocker target dose, and 2 patients achieved a spironolactone target dose. In the control arm, 7 patients were on ≥ 3 GDMTs, and 2 were taking 4 agents. No patient in either group achieved a target dose of 4 agents. Five of 11 control group interventions resulted in initiation or titration of GDMT to the target dose.
Of the 40 HT group patients, 7 were excluded from analysis (3 failed to schedule HT, 1 was at a long-term care facility, 1 was nonadherent, 1 declined participation, and 1 died) and 33 remained in the program for a mean (SD) 5.3 (3.5) months. Death rates were tracked during the study: 1 patient died in the HT group and 3 in the control group.
We analyzed the overall percentage of VAAAHS patients with HFrEF who were on appropriate GDMT. Given the ongoing drive to improve HF-related outcomes, HT interventions could not be compared to a static population, so the HT and control patients were compared with the rates of GDMT at VAAAHS, which were available in the Academic Detailing Service Heart Failure Dashboard (Figure 3). Titration and optimization rates were also compared (Figure 4). From July 2022 to June 2023, ARNi use increased by 16.6%, aldosterone antagonist use by 6.8%, and β-blocker use by 2.4%. Target doses of GDMTs were more difficult to achieve across the hospital system. Aldosterone antagonist target dose achievement increased by 4.7%, but target dose use decreased for the other GDMTs: ACEi/ARB/ARNi by 3.2%, ARNi by 2.7%, and β-blockers by 0.9%.
Discussion
Telehealth yielded clinically important interventions, with some titrations bringing patients to their target doses of medications for HFrEF. The 20 interventions made in the HT group can be largely attributed to the nurses’ efforts to alert clinicians to drug titrations or ACEi/ARB to ARNi transitions. Although the findings were not statistically significant, the difference in the number of drug therapy changes supports the use of the HT program as a GDMT optimization strategy. Titration can be difficult when adverse effects such as hypotension and hyperkalemia make medication initiation or up-titration inappropriate, although this was not observed in this small sample. Considering a mean HT enrollment of 5.3 months, many patients had adequate disease assessment and medication titration. Given that patients are discharged from the service once deemed appropriate, this decreases the burden on the patient and frees the HT program to serve other patients.
A surprising finding of this study was the lower rate of HF-related hospitalizations in the HT group. Although hospitalizations were not the primary subject of interest in the study, this finding reinforced the importance of close health care professional follow-up for patients living with HF. Telehealth may improve communication and shared decision making over medication use. Because the finding was unanticipated, the rate of diuretic adjustments was not tracked.
Patients were reevaluated every 6 months for willingness to continue the program, adherence, and clinical needs. These results are similar to those of other trials that demonstrated improved rates of GDMT in the setting of pharmacist- or nurse-led HF treatment optimization.15,16 They differ, however, from other remote monitoring trials with respect to patient continuation in HT programs. For example, in a study by Ding and colleagues, the withdrawal rate from the monitoring service was about 22%, while in our study only 1 patient withdrew from the HT program.11
The HT program resulted in fewer hospitalizations than the control arm. There were 6 HF-related hospitalizations in the control group, although 5 involved a single patient. Typically, such a patient would be encouraged to follow HT monitoring after just 1 HF-related hospitalization; however, the patient declined to participate.
Early optimization of GDMT in patients recently discharged from an HF-related hospitalization yields a reduction in rehospitalization.17 GDMT optimization has unequivocal benefits in HF outcomes; unfortunately, evidence on how best to optimize GDMT is lacking. While HT has been found to be a feasible aid in optimizing medical therapy, the TIM-HF trial concluded that remote monitoring services had no significant benefit in reducing mortality.7,8 On the other hand, the OptiLink HF study showed that when clinicians respond to remote monitoring prompts from fluid index threshold crossing alerts, these interventions are associated with significantly improved clinical outcomes in patients with implantable cardioverter-defibrillators and advanced HF.9 In contrast to previous trials, the AMULET trial showed that remote monitoring, compared with standard care, significantly reduced the risk of HF hospitalization or cardiovascular death during the 12-month follow-up among patients with HF and LVEF ≤ 49% after an episode of acute exacerbation.10 Additionally, patients who received skilled home health services and participated in remote monitoring for their chronic HF experienced a reduction in all-cause 30-day readmissions.18
Given the contrasting evidence regarding remote monitoring and the variable modalities of implementing interventions, we investigated whether HT monitoring yields improvements in GDMT optimization. We found that HT nurses were able to nearly double the rate of interventions for patients with HFrEF. Providing expanded services through the HT program will require adequate staffing and support. The HT program is geared toward following a large, diverse patient population, such as those with chronic obstructive pulmonary disease, hypertension, and HF. We only evaluated services for patients with HFrEF, but the program also follows patients with HFmrEF and HFpEF. These patients were not included because GDMT optimization differs for patients with an LVEF > 40%.19,20
The lower rates of achieving target doses of GDMTs were likely hindered by continued use of initial drug doses and further limited by lack of follow-up. When compared with the rest of the VAAAHS, there was a greater effort to increase ARNi use in the HT group, as 7 of 33 patients (21%) were started on ARNi compared with a background increase in ARNi use of 17%. There was a lower mortality rate in the HT group compared with the control group. One patient in each group died of unrelated causes, while 2 deaths in the control group were due to worsening HF. The difference in mortality is likely multifactorial, possibly related to the control group’s greater disease burden or higher mean age (75.2 years vs 72.6 years).
Limitations
First, this was an observational cohort design, which is subject to bias; the findings are therefore hypothesis-generating, and a randomized controlled trial would be necessary for clearer results. Second, the low number of participants may have skewed the results; notably, a single patient accounted for 5 hospitalizations. Given the observational nature of the study, the findings nonetheless support the HT program for assisting with HF monitoring and prompting drug interventions.
Conclusions
This pilot study demonstrates the applicability of HT monitoring to optimize GDMT in veterans with HFrEF. The HT program yielded a numerically greater number of interventions and fewer HF-related hospitalizations compared with the control arm. The study shows the feasibility of the program to safely optimize GDMT toward target doses and may serve as a catalyst to further develop HT programs specifically targeted toward HF monitoring and management. Cost-effectiveness analyses would be needed to demonstrate the cost utility of such a service.
Acknowledgments
We thank the home telehealth nursing staff for their assistance in data collection and enrollment of patients into the monitoring program.
1. Tsao CW, Aday AW, Almarzooq ZI, et al. Heart disease and stroke statistics-2022 update: a report from the American Heart Association. Circulation. 2022;145(8):e153-e639. doi:10.1161/CIR.0000000000001052
2. McCullough PA, Mehta HS, Barker CM, et al. Mortality and guideline-directed medical therapy in real-world heart failure patients with reduced ejection fraction. Clin Cardiol. 2021;44(9):1192-1198. doi:10.1002/clc.23664
3. Tromp J, Ouwerkerk W, van Veldhuisen DJ, et al. A systematic review and network meta-analysis of pharmacological treatment of heart failure with reduced ejection fraction. JACC Heart Fail. 2022;10(2):73-84. doi:10.1016/j.jchf.2021.09.004
4. Greene SJ, Butler J, Albert NM, et al. Medical therapy for heart failure with reduced ejection fraction: the CHAMP-HF Registry. J Am Coll Cardiol. 2018;72(4):351-366. doi:10.1016/j.jacc.2018.04.070
5. Pierce JB, Vaduganathan M, Fonarow GC, et al. Contemporary use of sodium-glucose cotransporter-2 inhibitor therapy among patients hospitalized for heart failure with reduced ejection fraction in the US: The Get With The Guidelines-Heart Failure Registry. JAMA Cardiol. 2023;8(7):652-661. doi:10.1001/jamacardio.2023.1266
6. Sandhu AT, Kohsaka S, Turakhia MP, Lewis EF, Heidenreich PA. Evaluation of quality of care for US veterans with recent-onset heart failure with reduced ejection fraction. JAMA Cardiol. 2022;7(2):130-139. doi:10.1001/jamacardio.2021.4585
7. Rahimi K, Nazarzadeh M, Pinho-Gomes AC, et al. Home monitoring with technology-supported management in chronic heart failure: a randomised trial. Heart. 2020;106(20):1573-1578. doi:10.1136/heartjnl-2020-316773
8. Koehler F, Winkler S, Schieber M, et al. Impact of remote telemedical management on mortality and hospitalizations in ambulatory patients with chronic heart failure: the telemedical interventional monitoring in heart failure study. Circulation. 2011;123(17):1873-1880. doi:10.1161/CIRCULATIONAHA.111.018473
9. Wintrich J, Pavlicek V, Brachmann J, et al. Remote monitoring with appropriate reaction to alerts was associated with improved outcomes in chronic heart failure: results from the OptiLink HF study. Circ Arrhythm Electrophysiol. 2021;14(1):e008693. doi:10.1161/CIRCEP.120.008693
10. Krzesinski P, Jankowska EA, Siebert J, et al. Effects of an outpatient intervention comprising nurse-led non-invasive assessments, telemedicine support and remote cardiologists’ decisions in patients with heart failure (AMULET study): a randomised controlled trial. Eur J Heart Fail. 2022;24(3):565-577. doi:10.1002/ejhf.2358
11. Ding H, Jayasena R, Chen SH, et al. The effects of telemonitoring on patient compliance with self-management recommendations and outcomes of the innovative telemonitoring enhanced care program for chronic heart failure: randomized controlled trial. J Med Internet Res. 2020;22(7):e17559. doi:10.2196/17559
12. Kitsiou S, Pare G, Jaana M. Effects of home telemonitoring interventions on patients with chronic heart failure: an overview of systematic reviews. J Med Internet Res. 2015;17(3):e63. doi:10.2196/jmir.4174
13. Artanian V, Ross HJ, Rac VE, O’Sullivan M, Brahmbhatt DH, Seto E. Impact of remote titration combined with telemonitoring on the optimization of guideline-directed medical therapy for patients with heart failure: internal pilot of a randomized controlled trial. JMIR Cardio. 2020;4(1):e21962. doi:10.2196/21962
14. Desai AS, Maclean T, Blood AJ, et al. Remote optimization of guideline-directed medical therapy in patients with heart failure with reduced ejection fraction. JAMA Cardiol. 2020;5(12):1430-1434. doi:10.1001/jamacardio.2020.3757
15. Patil T, Ali S, Kaur A, et al. Impact of pharmacist-led heart failure clinic on optimization of guideline-directed medical therapy (PHARM-HF). J Cardiovasc Transl Res. 2022;15(6):1424-1435. doi:10.1007/s12265-022-10262-9
16. Zheng J, Mednick T, Heidenreich PA, Sandhu AT. Pharmacist- and nurse-led medical optimization in heart failure: a systematic review and meta-analysis. J Card Fail. 2023;29(7):1000-1013. doi:10.1016/j.cardfail.2023.03.012
17. Mebazaa A, Davison B, Chioncel O, et al. Safety, tolerability and efficacy of up-titration of guideline-directed medical therapies for acute heart failure (STRONG-HF): a multinational, open-label, randomised, trial. Lancet. 2022;400(10367):1938-1952. doi:10.1016/S0140-6736(22)02076-1
18. O’Connor M, Asdornwised U, Dempsey ML, et al. Using telehealth to reduce all-cause 30-day hospital readmissions among heart failure patients receiving skilled home health services. Appl Clin Inform. 2016;7(2):238-47. doi:10.4338/ACI-2015-11-SOA-0157
19. Heidenreich PA, Bozkurt B, Aguilar D, et al. 2022 AHA/ACC/HFSA Guideline for the Management of Heart Failure: Executive Summary: A Report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. Circulation. 2022;145(18):e876-e894. doi:10.1161/CIR.0000000000001062
20. Kittleson MM, Panjrath GS, Amancherla K, et al. 2023 ACC Expert Consensus Decision Pathway on Management of Heart Failure With Preserved Ejection Fraction: A Report of the American College of Cardiology Solution Set Oversight Committee. J Am Coll Cardiol. 2023;81(18):1835-1878. doi:10.1016/j.jacc.2023.03.393
Given the contrasting evidence regarding remote monitoring and variable modalities of implementing interventions, we investigated whether HT monitoring yields improvements in GDMT optimization. We found that HT nurses were able to nearly double the rate of interventions for patients with HFrEF. The HT program in providing expanded services will require adequate staffing responsibilities and support. The HT program is geared toward following a large, diverse patient population, such as those with chronic obstructive pulmonary disease, hypertension, and HF. We only evaluated services for patients with HFrEF, but the program also follows patients with HfmrEF and HfpEF. These patients were not included as GDMT optimization differs for patients with an LVEF > 40%.19,20
The lower rates of achieving target doses of GDMTs were likely obstructed by continuous use of initial drug doses and further limited by lack of follow-up. When compared with the rest of the VAAAHS, there was a greater effort to increase ARNi use in the HT group as 7 of 33 patients (21%) were started on ARNi compared with a background increase of ARNi use of 17%. There was a lower mortality rate observed in the HT group compared with the control group. One patient in each group died of unrelated causes, while 2 deaths in the control group were due to worsening HF. The difference in mortality is likely multifactorial, possibly related to the control group’s greater disease burden or higher mean age (75.2 years vs 72.6 years).
Limitations
This was an observational cohort design, which is subject to bias. Thus, the findings of this study are entirely hypothesis-generating and a randomized controlled trial would be necessary for clearer results. Second, low numbers of participants may have skewed the data set. Given the observational nature of the study, this nonetheless is a positive finding to support the HT program for assisting with HF monitoring and prompting drug interventions. Due to the low number of participants, a single patient may have skewed the results with 5 hospitalizations.
Conclusions
This pilot study demonstrates the applicability of HT monitoring to optimize veteran HFrEF GDMT. The HT program yielded numerically relevant interventions and fewer HF-related hospitalizations compared with the control arm. The study shows the feasibility of the program to safely optimize GDMT toward their target doses and may serve as an additional catalyst to further develop HT programs specifically targeted toward HF monitoring and management. Cost-savings analyses would likely need to demonstrate the cost utility of such a service.
Acknowledgments
We thank the home telehealth nursing staff for their assistance in data collection and enrollment of patients into the monitoring program.
Heart failure (HF) is a chronic, progressive condition that is characterized by the heart’s inability to effectively pump blood throughout the body. In 2018, approximately 6.2 million US adults had HF, and 13.4% of all death certificates noted HF as a precipitating factor.1 Patients not receiving appropriate guideline-directed medical therapy (GDMT) face a 29% excess mortality risk over a 2-year period.2 Each additional GDMT included in a patient’s regimen significantly reduces all-cause mortality.3
The Change the Management of Patients with Heart Failure (CHAMP) registry reports that only about 1% of patients with HF are prescribed 3 agents from contemporary GDMT at target doses, highlighting the need for optimizing clinicians’ approaches to GDMT.4 Similarly, The Get With The Guidelines Heart Failure Registry has noted that only 20.2% of patients with HF with reduced ejection fraction (HFrEF) are prescribed a sodium-glucose cotransporter 2 inhibitor (SGLT2i) following hospital discharge for HFrEF exacerbation.5 Overall, treatment rates with GDMT saw limited improvement between 2013 and 2019, with no significant difference between groups in mortality, indicating the need for optimized methods to encourage the initiation of GDMT.6
Remote monitoring and telecare are novel ways to improve GDMT rates in those with HFrEF. However, data are inconsistent regarding the impact of remote HF monitoring and improvements in GDMT or HF-related outcomes.6-10 The modalities of remote monitoring for GDMT vary among studies, but the potential for telehealth monitoring to improve GDMT, thereby potentially reducing HF-related hospitalizations, is clear.
Telemonitoring has demonstrated improved participant adherence with weight monitoring, although the withdrawal rate was high, and has the potential to reduce all-cause mortality and HF-related hospitalizations.11,12 Telemonitoring for GDMT optimization led to an increase in the proportion of patients who achieved optimal GDMT doses, a decrease in the time to dose optimization, and a reduction in the number of clinic visits.13 Remote GDMT titration was accomplished in the general patient population with HFrEF; however, in populations already followed by cardiologists or HF specialists, remote optimization strategies did not yield different proportions of GDMT use.14 The aim of this study was to assess the impact of the home telehealth (HT) monitoring program on the initiation and optimization of HF GDMT among veterans with HFrEF at the Veterans Affairs Ann Arbor Healthcare System (VAAAHS) in Michigan.
Methods
This was a single-center retrospective study of Computerized Patient Record System (CPRS) data. Patients at the VAAAHS were evaluated if they were diagnosed with HFrEF and were eligible for enrollment in the HT monitoring program. Eligibility criteria included a diagnosis of stage C HF, irrespective of EF, and a history of any HF-related hospitalization. We focused on patients with HFrEF due to stronger guideline-based recommendations for certain pharmacotherapies as compared with HF with mildly reduced ejection fraction (HFmrEF) and HF with preserved ejection fraction (HFpEF). Initial patient data for HT enrollment were accessed using the Heart Failure Dashboard via the US Department of Veterans Affairs (VA) Academic Detailing Service. The target daily doses of typical agents used in HFrEF GDMT are listed in the Appendix.
The HT program is an embedded model in which HT nurses receive remote data from the patient and triage that with the VAAAHS cardiology team. Patients’ questions, concerns, and/or vital signs are recovered remotely. In this model, nurses are embedded in the cardiology team, working with the cardiologists, cardiology clinical pharmacist, and/or cardiology nurse practitioners to make medication interventions. Data are recorded with an HT device, including weight, blood pressure (BP), heart rate, and pulse oximetry. HT nurses are also available to the patient via phone or video. The program uses a 180-day disease management protocol for HF via remote device, enabling the patient to answer questions and receive education on their disease daily. Responses to questions and data are then reviewed by an HT nurse remotely during business hours and triaged as appropriate with the cardiology team. Data can be communicated to the cardiology team via the patient record, eliminating the need for the cardiology team to use the proprietary portal affiliated with the HT device.
Study Sample
Patient information was obtained from a list of 417 patients eligible for enrollment in the HT program; the list was sent to the HT program for review and enrollment. Patient data were extracted from the VAAAHS HF Dashboard and included all patients with HFrEF and available data on the platform. The sample for the retrospective chart review included 40 adults who had HFrEF, defined as a left ventricular EF (LVEF) of ≤ 40% as evidenced by a transthoracic echocardiogram or cardiac magnetic resonance imaging. These patients were contacted and agreed to enroll in the HT monitoring program. The HT program population was compared against a control group of 33 patients who were ineligible for the HT program. Patients were deemed ineligible for HT if they resided in a nursing home, lacked a VAAAHS primary care clinician, or declined participation in the HT program.
Procedures
Patients who declined participation in the HT program followed the standard of care, which was limited to visits with primary care clinicians and/or cardiologists as per the follow-up plan. Patient data were collected over 12 months. The study was approved by the VAAAHS Institutional Review Board (reference number, 1703034), Research and Development Committee, and Research Administration.
Primary and Secondary Goals
The primary goal of the study was to assess the impact of the HT program on drug interventions, specifically initiating and titrating HFrEF pharmacotherapies. Interventions were based on GDMT with known mortality- and morbidity-reducing properties when used at their maximum tolerated doses, including angiotensin-converting enzyme inhibitors (ACEi), angiotensin receptor-neprilysin inhibitor (ARNi), or angiotensin receptor blockers (ARB), with a preference for ARNi, β-blockers for HFrEF (metoprolol succinate, bisoprolol, or carvedilol), aldosterone antagonists, and SGLT2is.
Secondary outcomes included HF-related hospitalizations, medication adherence, time to enrollment in HT, time to laboratory analysis after the initiation or titration of an ACEi/ARB/ARNi or aldosterone antagonist, and time enrolled in the HT program. Patients were considered adherent if their drug refill history showed consistent fills of their medications. The χ2 test was used for total interventions made during the study period, and the Fisher exact test was used for all other comparisons.
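As a sketch of how such count comparisons can be run, the snippet below applies both tests with scipy to an illustrative 2×2 table; the counts are hypothetical (the study's patient-level contingency table is not reported here), so the numbers serve only to show the mechanics.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table (NOT the study's actual counts):
# rows = HT group vs control; columns = intervention vs no intervention
table = [[20, 13],
         [11, 22]]

# Chi-square test of independence (used for total interventions)
chi2, p_chi2, dof, expected = chi2_contingency(table)

# Fisher exact test (used for the other comparisons)
odds_ratio, p_fisher = fisher_exact(table)
```

The Fisher exact test is typically preferred when expected cell counts are small, as in a pilot cohort of this size.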
Results
Patient data were collected between July 2022 and June 2023. All 73 patients were male; the mean age was 72.6 years in the HT group (n = 40) and 75.2 years in the control group (n = 33). Overall, the baseline demographics were similar between the groups (Table 1). Of 40 patients screened for enrollment in the HT program, 33 were included in the analysis (Figure 1).
At baseline, the HT group included more individuals than the control group on ACEi/ARB/ARNi (24 vs 19), β-blockers (28 vs 24), SGLT2i (14 vs 11), and aldosterone antagonists (15 vs 9) (Figure 2). There were 20 interventions made in the HT group compared with 11 therapy changes in the control arm during the study (odds ratio, 1.43; P = .23) (Table 2). In the HT group, 1 patient achieved an ACEi target dose, 3 patients achieved a β-blocker target dose, and 7 achieved a target dose of spironolactone (SGLT2i therapy does not require titration and was counted as target dose). In the HT group, 17 patients were on ≥ 3 recommended agents, while 9 patients were taking 4 agents. Seven of 20 HT group interventions resulted in titration to the target dose. In the control group, no patients achieved an ARNi target dose, 3 patients achieved a β-blocker target dose, and 2 patients achieved a spironolactone target dose. In the control arm, 7 patients were on ≥ 3 GDMT agents, and 2 were taking 4 agents. No patient in either group achieved target doses of all 4 agents. Five of 11 control group interventions resulted in initiation or titration of GDMT to the target dose.
Of the 40 HT group patients, 7 were excluded from analysis (3 failed to schedule HT, 1 was at a long-term care facility, 1 was nonadherent, 1 declined participation, and 1 died) and 33 remained in the program for a mean (SD) 5.3 (3.5) months. Death rates were tracked during the study: 1 patient died in the HT group and 3 in the control group.
We analyzed the overall percentage of VAAAHS patients with HFrEF who were on appropriate GDMT. Given the ongoing drive to improve HF-related outcomes, HT interventions could not be compared to a static population, so the HT and control patients were compared with the rates of GDMT at VAAAHS, which were available in the Academic Detailing Service Heart Failure Dashboard (Figure 3). Titration and optimization rates were also compared (Figure 4). From July 2022 to June 2023, ARNi use increased by 16.6%, aldosterone antagonist use by 6.8%, and β-blocker use by 2.4%. Target doses of GDMT were more difficult to achieve across the hospital system: aldosterone antagonist target dose achievement increased by 4.7%, but ACEi/ARB/ARNi target dose use decreased by 3.2%, ARNi target dose use by 2.7%, and β-blocker target dose use by 0.9%.
Discussion
Telehealth yielded clinically important interventions, with some titrations bringing patients to their target doses of medications for HFrEF. The 20 interventions made in the HT group can be largely attributed to the nurses’ efforts to alert clinicians to drug titrations or ACEi/ARB to ARNi transitions. Although the findings were not statistically significant, the difference in the number of drug therapy changes supports the use of the HT program as a GDMT optimization strategy. Medication initiation or titration may be inappropriate for some patients secondary to adverse effects such as hypotension and hyperkalemia, although this was not observed in this small sample. Considering a mean HT enrollment of 5.3 months, many patients had adequate disease assessment and medication titration. Because patients are discharged from the service once deemed appropriate, the burden on each patient is limited and the program’s capacity to serve other patients increases.
A surprising finding of this study was the lower rate of HF-related hospitalizations in the HT group. Although hospitalization was not the primary outcome of interest, this finding reinforced the importance of close health care professional follow-up for patients living with HF. Telehealth may improve communication and shared decision making about medication use. Because the finding was unanticipated, the rate of diuretic adjustments was not tracked.
Patients were reevaluated every 6 months for willingness to continue the program, adherence, and clinical needs. These results are similar to those of other trials that demonstrated improved rates of GDMT in the setting of pharmacist- or nurse-led HF treatment optimization.15,16 These positive results differ from other trials incorporating remote monitoring regarding patient continuation in HT programs. For example, in a study by Ding and colleagues, the withdrawal rate from their monitoring service was about 22%, while in our study only 1 patient withdrew from the HT program.11
The HT program resulted in fewer hospitalizations than the control arm. There were 6 HF-related hospitalizations in the control group, although 5 involved a single patient. Typically, such a patient would be encouraged to follow HT monitoring after just 1 HF-related hospitalization; however, the patient declined to participate.
Early optimization of GDMT in patients recently discharged from the hospital after an HF-related hospitalization reduces rehospitalization.17 GDMT optimization has unequivocal benefits for HF outcomes. Unfortunately, data on how best to optimize GDMT are lacking. While HT has been found to be a feasible aid to optimizing medical therapy, the TIM-HF trial concluded that remote monitoring services had no significant benefit in reducing mortality.7,8 On the other hand, the OptiLink HF study showed that when clinicians respond to remote monitoring prompts from fluid index threshold crossing alerts, these interventions are associated with significantly improved clinical outcomes in patients with implantable cardioverter-defibrillators and advanced HF.9 In contrast to previous trials, the AMULET trial showed that remote monitoring compared with standard care significantly reduced the risk of HF hospitalization or cardiovascular death during the 12-month follow-up among patients with HF and LVEF ≤ 49% after an episode of acute exacerbation.10 Additionally, patients who received skilled home health services and participated in remote monitoring for chronic HF experienced a reduction in all-cause 30-day readmissions.18
Given the contrasting evidence regarding remote monitoring and the variable modalities of implementing interventions, we investigated whether HT monitoring yields improvements in GDMT optimization. We found that HT nurses nearly doubled the rate of interventions for patients with HFrEF. Providing expanded services through the HT program will require adequate staffing and support. The HT program is designed to follow a large, diverse patient population, including those with chronic obstructive pulmonary disease, hypertension, and HF. We only evaluated services for patients with HFrEF, but the program also follows patients with HFmrEF and HFpEF. These patients were not included because GDMT optimization differs for patients with an LVEF > 40%.19,20
The lower rates of achieving target doses of GDMT were likely hindered by continued use of initial drug doses and further limited by lack of follow-up. Compared with the rest of the VAAAHS, there was a greater effort to increase ARNi use in the HT group: 7 of 33 patients (21%) were started on an ARNi, compared with a background increase in ARNi use of 17%. Mortality was lower in the HT group than in the control group. One patient in each group died of unrelated causes, while 2 deaths in the control group were due to worsening HF. The difference in mortality is likely multifactorial, possibly related to the control group’s greater disease burden or higher mean age (75.2 vs 72.6 years).
Limitations
This was an observational cohort design, which is subject to bias; the findings of this study are therefore hypothesis-generating, and a randomized controlled trial would be necessary for clearer results. In addition, the small number of participants may have skewed the data set: a single patient accounted for 5 hospitalizations. Given the observational nature of the study, the results nonetheless support the HT program for assisting with HF monitoring and prompting drug interventions.
Conclusions
This pilot study demonstrates the applicability of HT monitoring for optimizing GDMT among veterans with HFrEF. The HT program yielded numerically relevant interventions and fewer HF-related hospitalizations compared with the control arm. The study shows the feasibility of the program to safely optimize GDMT toward target doses and may serve as an additional catalyst to further develop HT programs specifically targeted toward HF monitoring and management. Cost analyses would be needed to demonstrate the cost utility of such a service.
Acknowledgments
We thank the home telehealth nursing staff for their assistance in data collection and enrollment of patients into the monitoring program.
1. Tsao CW, Aday AW, Almarzooq ZI, et al. Heart disease and stroke statistics-2022 update: a report from the American Heart Association. Circulation. 2022;145(8):e153-e639. doi:10.1161/CIR.0000000000001052
2. McCullough PA, Mehta HS, Barker CM, et al. Mortality and guideline-directed medical therapy in real-world heart failure patients with reduced ejection fraction. Clin Cardiol. 2021;44(9):1192-1198. doi:10.1002/clc.23664
3. Tromp J, Ouwerkerk W, van Veldhuisen DJ, et al. A systematic review and network meta-analysis of pharmacological treatment of heart failure with reduced ejection fraction. JACC Heart Fail. 2022;10(2):73-84. doi:10.1016/j.jchf.2021.09.004
4. Greene SJ, Butler J, Albert NM, et al. Medical therapy for heart failure with reduced ejection fraction: the CHAMP-HF Registry. J Am Coll Cardiol. 2018;72(4):351-366. doi:10.1016/j.jacc.2018.04.070
5. Pierce JB, Vaduganathan M, Fonarow GC, et al. Contemporary use of sodium-glucose cotransporter-2 inhibitor therapy among patients hospitalized for heart failure with reduced ejection fraction in the US: The Get With The Guidelines-Heart Failure Registry. JAMA Cardiol. 2023;8(7):652-661. doi:10.1001/jamacardio.2023.1266
6. Sandhu AT, Kohsaka S, Turakhia MP, Lewis EF, Heidenreich PA. Evaluation of quality of care for US veterans with recent-onset heart failure with reduced ejection fraction. JAMA Cardiol. 2022;7(2):130-139. doi:10.1001/jamacardio.2021.4585
7. Rahimi K, Nazarzadeh M, Pinho-Gomes AC, et al. Home monitoring with technology-supported management in chronic heart failure: a randomised trial. Heart. 2020;106(20):1573-1578. doi:10.1136/heartjnl-2020-316773
8. Koehler F, Winkler S, Schieber M, et al. Impact of remote telemedical management on mortality and hospitalizations in ambulatory patients with chronic heart failure: the telemedical interventional monitoring in heart failure study. Circulation. 2011;123(17):1873-1880. doi:10.1161/CIRCULATIONAHA.111.018473
9. Wintrich J, Pavlicek V, Brachmann J, et al. Remote monitoring with appropriate reaction to alerts was associated with improved outcomes in chronic heart failure: results from the OptiLink HF study. Circ Arrhythm Electrophysiol. 2021;14(1):e008693. doi:10.1161/CIRCEP.120.008693
10. Krzesinski P, Jankowska EA, Siebert J, et al. Effects of an outpatient intervention comprising nurse-led non-invasive assessments, telemedicine support and remote cardiologists’ decisions in patients with heart failure (AMULET study): a randomised controlled trial. Eur J Heart Fail. 2022;24(3):565-577. doi:10.1002/ejhf.2358
11. Ding H, Jayasena R, Chen SH, et al. The effects of telemonitoring on patient compliance with self-management recommendations and outcomes of the innovative telemonitoring enhanced care program for chronic heart failure: randomized controlled trial. J Med Internet Res. 2020;22(7):e17559. doi:10.2196/17559
12. Kitsiou S, Pare G, Jaana M. Effects of home telemonitoring interventions on patients with chronic heart failure: an overview of systematic reviews. J Med Internet Res. 2015;17(3):e63. doi:10.2196/jmir.4174
13. Artanian V, Ross HJ, Rac VE, O’Sullivan M, Brahmbhatt DH, Seto E. Impact of remote titration combined with telemonitoring on the optimization of guideline-directed medical therapy for patients with heart failure: internal pilot of a randomized controlled trial. JMIR Cardio. 2020;4(1):e21962. doi:10.2196/21962
14. Desai AS, Maclean T, Blood AJ, et al. Remote optimization of guideline-directed medical therapy in patients with heart failure with reduced ejection fraction. JAMA Cardiol. 2020;5(12):1430-1434. doi:10.1001/jamacardio.2020.3757
15. Patil T, Ali S, Kaur A, et al. Impact of pharmacist-led heart failure clinic on optimization of guideline-directed medical therapy (PHARM-HF). J Cardiovasc Transl Res. 2022;15(6):1424-1435. doi:10.1007/s12265-022-10262-9
16. Zheng J, Mednick T, Heidenreich PA, Sandhu AT. Pharmacist- and nurse-led medical optimization in heart failure: a systematic review and meta-analysis. J Card Fail. 2023;29(7):1000-1013. doi:10.1016/j.cardfail.2023.03.012
17. Mebazaa A, Davison B, Chioncel O, et al. Safety, tolerability and efficacy of up-titration of guideline-directed medical therapies for acute heart failure (STRONG-HF): a multinational, open-label, randomised, trial. Lancet. 2022;400(10367):1938-1952. doi:10.1016/S0140-6736(22)02076-1
18. O’Connor M, Asdornwised U, Dempsey ML, et al. Using telehealth to reduce all-cause 30-day hospital readmissions among heart failure patients receiving skilled home health services. Appl Clin Inform. 2016;7(2):238-47. doi:10.4338/ACI-2015-11-SOA-0157
19. Heidenreich PA, Bozkurt B, Aguilar D, et al. 2022 AHA/ACC/HFSA Guideline for the Management of Heart Failure: Executive Summary: A Report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. Circulation. 2022;145(18):e876-e894. doi:10.1161/CIR.0000000000001062
20. Kittleson MM, Panjrath GS, Amancherla K, et al. 2023 ACC Expert Consensus Decision Pathway on Management of Heart Failure With Preserved Ejection Fraction: A Report of the American College of Cardiology Solution Set Oversight Committee. J Am Coll Cardiol. 2023;81(18):1835-1878. doi:10.1016/j.jacc.2023.03.393
Effect of Multidisciplinary Transitional Pain Service on Health Care Use and Costs Following Orthopedic Surgery
Opioid use disorder (OUD) is a significant cause of morbidity, mortality, and health care costs in the US.1,2 Surgery can be the inciting opioid exposure; as many as 23% of patients develop chronic OUD following surgery.3,4 Patients with a history of substance use, mood disorders, anxiety, or previous chronic opioid use (COU) are at risk for relapse, dose escalation, and poor pain control after high-risk surgery, such as orthopedic joint procedures.5 Recent focus has been on identifying high-risk patients before orthopedic joint surgery and implementing evidence-based strategies that reduce the postoperative incidence of COU.
A transitional pain service (TPS) has been shown to reduce COU for high-risk surgical patients in different health care settings.6-9 The TPS model bundles multiple interventions that can be applied to patients at high risk for COU within a health care system. This includes individually tailored programs for preoperative education or pain management planning, use of multimodal analgesia (including regional or neuraxial techniques or nonopioid systemic medications), application of nonpharmacologic modalities (such as cognitive-based intervention), and a coordinated approach to postdischarge instructions and transitions of care. These interventions are coordinated by a multidisciplinary clinical service consisting of anesthesiologists and advanced practice clinicians with specialization in acute pain management and opioid tapering, nurse care coordinators, and psychologists with expertise in cognitive behavioral therapy.
TPS has been shown to reduce the incidence of COU for patients undergoing orthopedic joint surgery, but its impact on health care use and costs is unknown.6-9 The TPS intervention is resource intensive and increases the use of health care for preoperative education and pain management, which may increase costs. However, reducing long-term COU may reduce the use of health care for COU- and OUD-related complications, leading to cost savings. This study evaluated whether the TPS intervention influenced health care use and costs for inpatient, outpatient, or pharmacy services during the year following orthopedic joint surgery compared with standard pain management for procedures that place patients at high risk for COU. We used a difference-in-differences (DID) analysis to estimate this intervention effect, using multivariable regression models that controlled for unobserved time trends and cohort characteristics.
METHODS
This was a quasi-experimental study of patients who underwent orthopedic joint surgery and associated procedures at high risk for COU at the Veterans Affairs Salt Lake City Healthcare System (VASLCHS) from January 2016 through April 2020. The pre-TPS period (January 2016 through December 2017) was compared with the post-TPS period (January 2018 through September 2019). The control patient cohort was selected from 5 geographically diverse VA health care systems throughout the US: Eastern Colorado, Central Plains (Nebraska), White River Junction (Vermont), North Florida/South Georgia, and Portland (Oregon). By sampling health care costs from VA medical centers (VAMCs) across these different regions, our control group was generalizable to veterans receiving orthopedic joint surgery across the US. This study used data from the US Department of Veterans Affairs (VA) Corporate Data Warehouse, a repository of nearly all clinical and administrative data found in electronic health records for VA-provided care and fee-basis care paid for by the VA.10 All data were hosted and analyzed in the VA Informatics and Computing Infrastructure (VINCI) workspace. The University of Utah Institutional Review Board and the VASLCHS Office of Research and Development approved the protocol for this study.
TPS Intervention
The VASLCHS TPS has been described in detail elsewhere.6,7 Briefly, patients at high risk for COU at the VASLCHS were enrolled in the TPS program before surgery for total knee, hip, or shoulder arthroplasty or rotator cuff procedures. The TPS team consists of an anesthesiologist and advanced practice clinician with specialization in acute pain management and opioid tapering, a psychologist with expertise in cognitive behavioral therapy, and 3 nurse care coordinators. These TPS practitioners work together to provide preoperative education, including setting expectations regarding postoperative pain, recommending nonopioid pain management strategies, and providing guidance regarding the appropriate use of opioids for surgical pain. Individual pain plans were developed and implemented for the perioperative period. After surgery, the TPS provided recommendations and support for nonopioid pain therapies and opioid tapers. Patients were followed by the TPS team for at least 12 months after surgery. At a minimum, the goals set by TPS included cessation of all opioid use for prior nonopioid users (NOU) by 90 days after surgery and the return to baseline opioid use or lower for prior COU patients by 90 days after surgery. The TPS also encouraged and supported opioid tapering among COU patients to reduce or completely stop opioid use after surgery.
Patient Cohorts
Veterans having primary or revision total knee, hip, or shoulder arthroplasty or rotator cuff repair between January 1, 2016, and September 30, 2019, at the aforementioned VAMCs were included in the study. Patients who had any hospitalization within 90 days pre- or postindex surgery or who died within 8 months after surgery were excluded from analysis. Patients who had multiple surgeries during the index inpatient visit or within 90 days after the index surgery also were excluded. Comorbid conditions for mental health and substance use were identified using the International Classification of Diseases, 10th revision Clinical Modification (ICD-10) codes or 9th revision equivalent grouped by Clinical Classifications Software Refined (CCS-R).11 Preoperative exposure to clinically relevant pharmacotherapy (ie, agents associated with prolonged opioid use and nonopioid adjuvants) was captured using VA outpatient prescription records (eAppendix 1).
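The exclusion rules above (any hospitalization within 90 days before or after the index surgery, or death within 8 months after surgery) can be sketched as a simple eligibility filter. This is an illustration only: the function name and inputs are invented, and "8 months" is approximated as 240 days, which the study does not specify.

```python
from datetime import date, timedelta

def eligible(surgery_date, hospitalization_dates, death_date):
    """Sketch of the study's exclusion criteria (names and the 240-day
    approximation of "8 months" are assumptions, not from the paper)."""
    # Exclude any hospitalization within 90 days before or after the index surgery.
    window = timedelta(days=90)
    if any(abs(h - surgery_date) <= window for h in hospitalization_dates):
        return False
    # Exclude death within ~8 months (approximated as 240 days) after surgery.
    if death_date is not None and death_date <= surgery_date + timedelta(days=240):
        return False
    return True

# A patient hospitalized 31 days after the index surgery would be excluded;
# one hospitalized 5 months later would remain in the cohort.
```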
Outcome Variables
Outcome variables included health care use and costs during 1-year pre- and postperiods from the date of surgery. VA health care costs for outpatient, inpatient, and pharmacy services for direct patient care were collected from the Managerial Cost Accounting System, an activity-based cost allocation system that generates estimates of the cost of individual VA hospital stays, health care encounters, and medications. Health care use was defined as the number of encounters for each visit type in the Managerial Cost Accounting System. All costs were adjusted to 2019 US dollars, using the Personal Consumption Expenditures price index for health care services.15
A set of sociodemographic variables including sex, age at surgery, race and ethnicity, rurality, military branch (Army, Air Force, Marine Corps, Navy, and other), and service connectivity were included as covariates in our regression models.
Statistical Analyses
Descriptive analyses were used to evaluate differences in baseline patient sociodemographic and clinical characteristics between pre- and postperiods for TPS intervention and control cohorts using 2-sample t tests for continuous variables and χ2 tests for categorical variables. We summarized unadjusted health care use and costs for outpatient, inpatient, and pharmacy visits and compared the pre- and postintervention periods using the Mann-Whitney test. Both mean (SD) and median (IQR) were considered, reflecting the skewed distribution of the outcome variables.
We used a DID approach to assess the intervention effect while minimizing confounding from the nonrandom sample. The DID approach controls for unobserved differences between VAMCs that are related to both the intervention and outcomes while controlling for trends over time that could affect outcomes across clinics. To implement the DID approach, we included 3 key independent variables in our regression models: (1) an indicator for whether the observation occurred in the postintervention period; (2) an indicator for whether the patient was exposed to the TPS intervention; and (3) the interaction between these 2 variables.
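To illustrate how the three indicator variables recover the DID estimate, the sketch below fits an ordinary least-squares model to simulated visit counts (all numbers invented); in a saturated 2x2 design, the interaction coefficient equals the difference in differences of the four cell means. The study's actual models are multivariable GLMs with covariates, not this simplified OLS.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # simulated patients per cell of the 2x2 (period x exposure) design

# Simulated outcome: true model 30 + 2*post + 1*tps + 5*post*tps + noise,
# so the true DID (interaction) effect is 5. All numbers are invented.
post_l, tps_l, y_l = [], [], []
for p in (0, 1):
    for t in (0, 1):
        y_l.append(rng.normal(30 + 2 * p + 1 * t + 5 * p * t, 1.0, n))
        post_l.append(np.full(n, p, dtype=float))
        tps_l.append(np.full(n, t, dtype=float))
post, tps, y = np.concatenate(post_l), np.concatenate(tps_l), np.concatenate(y_l)

# The 3 key regressors from the text: post-period indicator, TPS-exposure
# indicator, and their interaction (plus an intercept).
X = np.column_stack([np.ones_like(y), post, tps, post * tps])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
did_estimate = beta[3]  # interaction coefficient = DID estimate (close to 5)
```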
For cost outcomes, we used multivariable generalized linear models with a log link and a Poisson or gamma family. We analyzed inpatient costs using a 2-part generalized linear model because only 17% to 20% of patients had ≥ 1 inpatient visit. We used multivariable negative binomial regression for health care use outcomes. Demographic and clinical covariates described earlier were included in the regression models to control for differences in the composition of patient groups and clinics that could lead to confounding bias.
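The 2-part logic can be shown in its simplest unadjusted form: expected cost combines the probability of any inpatient use with the mean cost among users. This is an intuition-building sketch only; the study's models add a log link, gamma or Poisson families, and covariates.

```python
def two_part_expected_cost(costs):
    """Unadjusted two-part estimate: P(any use) x mean cost among users.

    Without covariates this reduces to the overall mean; the value of the
    two-part structure comes from modeling each part separately with
    covariates when most patients have zero cost.
    """
    users = [c for c in costs if c > 0]
    if not users:
        return 0.0
    p_any = len(users) / len(costs)          # part 1: probability of any use
    mean_positive = sum(users) / len(users)  # part 2: mean cost among users
    return p_any * mean_positive

# Hypothetical cohort: 2 of 10 patients with an inpatient stay (~20%,
# as in the study); expected cost = 0.2 x 15000 = 3000.
costs = [0, 0, 0, 0, 0, 0, 0, 0, 9000, 21000]
expected = two_part_expected_cost(costs)
```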
RESULTS
Of the 4954 patients included in our study cohort, 3545 (71.6%) were in the NOU group and 1409 (28.4%) were in the COU group. Among the NOU cohort, 361 patients were in the intervention group and 3184 in the control group. Among the COU cohort, 149 patients were in the intervention group and 1260 in the control group (Table 1). Most patients were male and White, with a mean (SD) age of 64 (11) years. The most common orthopedic procedure was total knee arthroplasty, followed by total hip arthroplasty. In both the NOU and COU cohorts, patient characteristics were similar between the pre- and postintervention periods for both the TPS and control groups.
Figures 1 and 2 and eAppendix 2 depict unadjusted per-person average outpatient, inpatient, and pharmacy visits and costs incurred during the 1-year pre- and postintervention periods for the NOU and COU cohorts. Average total health care follow-up costs ranged from $40,000 to $53,000 for the NOU cohort and from $47,000 to $82,000 for the COU cohort. Outpatient visit costs accounted for about 70% of average total costs, inpatient visit costs for about 20%, and pharmacy costs for the remainder.
For the NOU cohort, the number of health care encounters remained fairly stable between periods except for outpatient visits among the TPS group. The TPS group experienced an increase in mean outpatient visits in the postperiod: 30 vs 37 visits (a 23% increase).
Table 2 summarizes the results from the multivariable DID analyses for the outpatient, inpatient, and pharmacy visit and cost outcomes. Here, the estimated effect of the TPS intervention is the coefficient from the interaction between the postintervention and TPS exposure indicator variables. This coefficient was calculated as the difference in the outcome before and after the TPS intervention among the TPS group minus the difference in the outcome before and after the TPS intervention among the control group. For the NOU cohort, TPS was associated with an increase in outpatient health care use (mean [SD] increase of 6.9 [2] visits; P < .001) after the surgery, with no statistically significant effect on outpatient costs (mean [SD] increase of $2787 [$3749]; P = .55). There was no statistically significant effect of TPS on inpatient or pharmacy use, but there was a decrease in inpatient visit costs among those who had at least 1 inpatient visit (mean [SD] decrease of $12,170 [$6100]; P = .02). For the COU cohort, TPS had no statistically significant impact on outpatient, inpatient, or pharmacy use or the corresponding costs.
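The coefficient interpretation described above reduces to subtracting two pre/post differences. A minimal sketch follows, using invented group means; the control values here are hypothetical and not taken from the study.

```python
def did_effect(exposed_pre, exposed_post, control_pre, control_post):
    """(Change in the TPS group) minus (change in the control group)."""
    return (exposed_post - exposed_pre) - (control_post - control_pre)

# Hypothetical mean outpatient visit counts per patient: the TPS group
# rises from 30 to 37 visits while a control group rises from 31 to 31.5,
# giving a DID effect of 7.0 - 0.5 = 6.5 visits.
effect = did_effect(exposed_pre=30.0, exposed_post=37.0,
                    control_pre=31.0, control_post=31.5)
```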
DISCUSSION
TPS is a multidisciplinary approach to perioperative pain management that has been shown to reduce both the quantity and duration of opioid use among orthopedic surgery patients.6,7 Although the cost burden of providing TPS services to prevent COU is borne by the individual health care system, it is unclear whether this expense is offset by lower long-term medical costs and health care use for COU- and OUD-related complications. In this study focused on a veteran population undergoing orthopedic joint procedures, a DID analysis of cost and health care use showed that TPS, which has been shown to reduce COU for high-risk surgical patients, can be implemented without increasing the overall costs to the VA health care system during the 1 year following surgery, even with increased outpatient visits. For NOU patients, there was no difference in outpatient visit costs or pharmacy costs over 12 months after surgery, although there was a significant reduction in subsequent inpatient costs over the same period. Further, there was no difference in outpatient, inpatient, or pharmacy costs after surgery for COU patients. These findings suggest that TPS can be a cost-effective approach to reduce opioid use among patients undergoing orthopedic joint surgery in VAMCs.
The costs of managing COU after surgery are substantial. Prior reports have shown that adjusted total health care costs are 1.6 to 2.5 times higher for previously NOU patients with new COU after major surgery than for comparable patients without persistent use.16 The 1-year costs associated with new COU in that study ranged between $7944 and $17,702 after inpatient surgery and between $5598 and $12,834 after outpatient index surgery, depending on the payer, which are in line with the cost differences found in our current study. Another report showed that patients with COU following orthopedic joint replacement had higher use of inpatient, emergency department, and ambulance/paramedic services in the 12 months following their surgery than did those without persistent use.17 Although these results highlight the role COU plays in driving increased costs after major surgery, there have been limited studies focused on interventions that can neutralize the costs associated with opioid misuse after surgery. To our knowledge, our study is the first analysis to show the impact of an intervention such as TPS, which reduces postoperative opioid use, on health care use and cost.
Although a rigorous and comprehensive return on investment analysis was beyond the scope of this analysis, these results may have several implications for other health care systems and hospitals that wish to invest in a multidisciplinary perioperative pain management program such as TPS but may be reluctant due to the upfront investment. First, the increased number of patient follow-up visits needed during TPS seems to be more than offset by the reduction in opioid use and associated complications that may occur after surgery. Second, TPS did not seem to be associated with an increase in overall health care costs during the 1-year follow-up period. Together, these results indicate that the return on investment for a TPS approach to perioperative pain management in which optimal patient-centered outcomes are achieved without increasing long-term costs to a health care system may be positive.
Limitations
This study has several limitations. First, this was a quasi-experimental observational study, and the associations we identified between intervention and outcomes should not be assumed to demonstrate causality. Although our DID analysis controlled for an array of demographic and clinical characteristics, differences in medical costs and health care use between the 2 cohorts might be driven by unobserved confounding variables.
Our study also was limited to veterans who received medical care at the VA, and results may not be generalizable to other non-VA health care systems or to veterans with Medicare insurance who have dual benefits. While our findings on health care use and costs may be incomplete because of uncaptured health care use outside the VA, our DID analysis helped reduce unobserved bias because the absence of data outside of VA care applies to both TPS and control groups. Further, the total costs of operating a TPS program at any given institution will depend on the size of the hospital and volume of surgical patients who meet criteria for enrollment. However, the relative differences in health care use and costs may be extrapolated to patients undergoing orthopedic surgery in other types of academic and community-based health care systems.
Furthermore, this analysis focused primarily on COU and NOU patients undergoing orthopedic joint surgery. While this represents a high-risk population for OUD, the costs and health care use associated with delivering the TPS intervention to other types of surgical procedures may be significantly different. All costs in this analysis were based on 2019 estimates and do not account for the potential inflation over the past several years. Nonmonetary costs to the patient and per-person average total intervention costs were not included in the study. However, we assumed that costs associated with TPS and standard of care would have increased to an equivalent degree over the same period. Further, the average cost of TPS per patient (approximately $900) is relatively small compared with the average annual costs during 1-year pre- and postoperative periods and was not expected to have a significant effect on the analysis.
Conclusions
We found that the significant reduction in COU seen in previous studies following the implementation of TPS was not accompanied by increased health care costs.6,7 When considering the other costs of long-term opioid use, such as abuse potential, overdose, death, and increased disability, implementation of a TPS service has the potential to improve patient quality of life while reducing other health-related costs. Health care systems should consider the implementation of similar multidisciplinary approaches to perioperative pain management to improve outcomes after orthopedic joint surgery and other high-risk procedures.
1. Rudd RA, Seth P, David F, et al. Increases in drug and opioid-involved overdose deaths—United States, 2010-2015. MMWR Morb Mortal Wkly Rep. 2016;65(50-51):1445-1452. doi:10.15585/mmwr.mm655051e1
2. Florence CS, Zhou C, Luo F, Xu L. The economic burden of prescription opioid overdose, abuse, and dependence in the United States, 2013. Med Care. 2016;54(10):901-906. doi:10.1097/MLR.0000000000000625
3. Jiang X, Orton M, Feng R, et al. Chronic opioid usage in surgical patients in a large academic center. Ann Surg. 2017;265(4):722-727. doi:10.1097/SLA.0000000000001780
4. Johnson SP, Chung KC, Zhong L, et al. Risk of prolonged opioid use among opioid-naive patients following common hand surgery procedures. J Hand Surg Am. 2016;41(10):947-957.e3. doi:10.1016/j.jhsa.2016.07.113
5. Brummett CM, Waljee JF, Goesling J, et al. New persistent opioid use after minor and major surgical procedures in US adults. JAMA Surg. 2017;152(6):e170504. doi:10.1001/jamasurg.2017.0504
6. Buys MJ, Bayless K, Romesser J, et al. Multidisciplinary transitional pain service for the veteran population. Fed Pract. 2020;37(10):472-478. doi:10.12788/fp.0053
7. Buys MJ, Bayless K, Romesser J, et al. Opioid use among veterans undergoing major joint surgery managed by a multidisciplinary transitional pain service. Reg Anesth Pain Med. 2020;45(11):847-852. doi:10.1136/rapm-2020-101797
8. Huang A, Katz J, Clarke H. Ensuring safe prescribing of controlled substances for pain following surgery by developing a transitional pain service. Pain Manag. 2015;5(2):97-105. doi:10.2217/pmt.15.7
9. Katz J, Weinrib A, Fashler SR, et al. The Toronto General Hospital Transitional Pain Service: development and implementation of a multidisciplinary program to prevent chronic postsurgical pain. J Pain Res. 2015;8:695-702. doi:10.2147/JPR.S91924
10. Fihn SD, Francis J, Clancy C, et al. Insights from advanced analytics at the Veterans Health Administration. Health Aff (Millwood). 2014;33(7):1203-1211. doi:10.1377/hlthaff.2014.0054
11. Agency for Healthcare Research and Quality. Clinical Classifications Software Refined (CCSR). Updated December 9, 2022. Accessed October 30, 2023. www.hcup-us.ahrq.gov/toolssoftware/ccsr/ccs_refined.jsp
12. Mosher HJ, Richardson KK, Lund BC. The 1-year treatment course of new opioid recipients in Veterans Health Administration. Pain Med. 2016;17(7):1282-1291. doi:10.1093/pm/pnw058
13. Hadlandsmyth K, Mosher HJ, Vander Weg MW, O’Shea AM, McCoy KD, Lund BC. Utility of accumulated opioid supply days and individual patient factors in predicting probability of transitioning to long-term opioid use: an observational study in the Veterans Health Administration. Pharmacol Res Perspect. 2020;8(2):e00571. doi:10.1002/prp2.571
14. Pagé MG, Kudrina I, Zomahoun HTV, et al. Relative frequency and risk factors for long-term opioid therapy following surgery and trauma among adults: a systematic review protocol. Syst Rev. 2018;7(1):97. doi:10.1186/s13643-018-0760-3
15. US Bureau of Economic Analysis. Price indexes for personal consumption expenditures by major type of product. Accessed October 30, 2023. https://apps.bea.gov/iTable/?reqid=19&step=3&isuri=1&nipa_table_list=64&categories=survey
16. Brummett CM, Evans-Shields J, England C, et al. Increased health care costs associated with new persistent opioid use after major surgery in opioid-naive patients. J Manag Care Spec Pharm. 2021;27(6):760-771. doi:10.18553/jmcp.2021.20507
17. Gold LS, Strassels SA, Hansen RN. Health care costs and utilization in patients receiving prescriptions for long-acting opioids for acute postsurgical pain. Clin J Pain. 2016;32(9):747-754. doi:10.1097/ajp.0000000000000322
Although a rigorous and comprehensive return on investment analysis was beyond the scope of this analysis, these results may have several implications for other health care systems and hospitals that wish to invest in a multidisciplinary perioperative pain management program such as TPS but may be reluctant due to the upfront investment. First, the increased number of patient follow-up visits needed during TPS seems to be more than offset by the reduction in opioid use and associated complications that may occur after surgery. Second, TPS did not seem to be associated with an increase in overall health care costs during the 1-year follow-up period. Together, these results indicate that the return on investment for a TPS approach to perioperative pain management in which optimal patient-centered outcomes are achieved without increasing long-term costs to a health care system may be positive.
Limitations
This study has several limitations. First, this was a quasi-experimental observational study, and the associations we identified between intervention and outcomes should not be assumed to demonstrate causality. Although our DID analysis controlled for an array of demographic and clinical characteristics, differences in medical costs and health care use between the 2 cohorts might be driven by unobserved confounding variables.
Our study also was limited to veterans who received medical care at the VA, and results may not be generalizable to other non-VA health care systems or to veterans with Medicare insurance who have dual benefits. While our finding on health care use and costs may be incomplete because of the uncaptured health care use outside the VA, our DID analysis helped reduce unobserved bias because the absence of data outside of VA care applies to both TPS and control groups. Further, the total costs of operating a TPS program at any given institution will depend on the size of the hospital and volume of surgical patients who meet criteria for enrollment. However, the relative differences in health care use and costs may be extrapolated to patients undergoing orthopedic surgery in other types of academic and community-based health care systems.
Furthermore, this analysis focused primarily on COU and NOU patients undergoing orthopedic joint surgery. While this represents a high-risk population for OUD, the costs and health care use associated with delivering the TPS intervention to other types of surgical procedures may be significantly different. All costs in this analysis were based on 2019 estimates and do not account for the potential inflation over the past several years. Nonmonetary costs to the patient and per-person average total intervention costs were not included in the study. However, we assumed that costs associated with TPS and standard of care would have increased to an equivalent degree over the same period. Further, the average cost of TPS per patient (approximately $900) is relatively small compared with the average annual costs during 1-year pre- and postoperative periods and was not expected to have a significant effect on the analysis.
Conclusions
We found that the significant reduction in COU seen in previous studies following the implementation of TPS was not accompanied by increased health care costs.6,7 When considering the other costs of long-term opioid use, such as abuse potential, overdose, death, and increased disability, implementation of a TPS service has the potential to improve patient quality of life while reducing other health-related costs. Health care systems should consider the implementation of similar multidisciplinary approaches to perioperative pain management to improve outcomes after orthopedic joint surgery and other high-risk procedures.
Opioid use disorder (OUD) is a significant cause of morbidity, mortality, and health care costs in the US.1,2 Surgery can be the inciting cause for exposure to an opioid; as many as 23% of patients develop chronic OUD following surgery.3,4 Patients with a history of substance use, mood disorders, anxiety, or previous chronic opioid use (COU) are at risk for relapse, dose escalation, and poor pain control after high-risk surgery, such as orthopedic joint procedures.5 Recent efforts have focused on identifying high-risk patients before orthopedic joint surgery and implementing evidence-based strategies that reduce the postoperative incidence of COU.
A transitional pain service (TPS) has been shown to reduce COU for high-risk surgical patients in different health care settings.6-9 The TPS model bundles multiple interventions that can be applied to patients at high risk for COU within a health care system. This includes individually tailored programs for preoperative education or pain management planning, use of multimodal analgesia (including regional or neuraxial techniques or nonopioid systemic medications), application of nonpharmacologic modalities (such as cognitive-based intervention), and a coordinated approach to postdischarge instructions and transitions of care. These interventions are coordinated by a multidisciplinary clinical service consisting of anesthesiologists and advanced practice clinicians with specialization in acute pain management and opioid tapering, nurse care coordinators, and psychologists with expertise in cognitive behavioral therapy.
TPS has been shown to reduce the incidence of COU for patients undergoing orthopedic joint surgery, but its impact on health care use and costs is unknown.6-9 The TPS intervention is resource intensive and increases the use of health care for preoperative education or pain management, which may increase the burden of costs. However, reducing long-term COU may reduce the use of health care for COU- and OUD-related complications, leading to cost savings. This study evaluated whether the TPS intervention influenced health care use and cost for inpatient, outpatient, or pharmacy services during the year following orthopedic joint surgery compared with that of the standard pain management care for procedures that place patients at high risk for COU. We used a difference-in-differences (DID) analysis to estimate this intervention effect, using multivariable regression models that controlled for unobserved time trends and cohort characteristics.
METHODS
This was a quasi-experimental study of patients who underwent orthopedic joint surgery and associated procedures at high risk for COU at the Veterans Affairs Salt Lake City Healthcare System (VASLCHS) between January 2016 and April 2020. The pre-TPS period (January 2016 through December 2017) was compared with the post-TPS period (January 2018 through September 2019). The control patient cohort was selected from 5 geographically diverse VA health care systems throughout the US: Eastern Colorado, Central Plains (Nebraska), White River Junction (Vermont), North Florida/South Georgia, and Portland (Oregon). By sampling health care costs from VA medical centers (VAMCs) across these different regions, our control group was generalizable to veterans receiving orthopedic joint surgery across the US. This study used data from the US Department of Veterans Affairs (VA) Corporate Data Warehouse, a repository of nearly all clinical and administrative data found in electronic health records for VA-provided care and fee-basis care paid for by the VA.10 All data were hosted and analyzed in the VA Informatics and Computing Infrastructure (VINCI) workspace. The University of Utah Institutional Review Board and the VASLCHS Office of Research and Development approved the protocol for this study.
TPS Intervention
The VASLCHS TPS has already been described in detail elsewhere.6,7 Briefly, patients at high risk for COU at the VASLCHS were enrolled in the TPS program before surgery for total knee, hip, or shoulder arthroplasty or rotator cuff procedures. The TPS service consists of an anesthesiologist and advanced practice clinician with specialization in acute pain management and opioid tapering, a psychologist with expertise in cognitive behavioral therapy, and 3 nurse care coordinators. These TPS practitioners work together to provide preoperative education, including setting expectations regarding postoperative pain, recommending nonopioid pain management strategies, and providing guidance regarding the appropriate use of opioids for surgical pain. Individual pain plans were developed and implemented for the perioperative period. After surgery, the TPS provided recommendations and support for nonopioid pain therapies and opioid tapers. Patients were followed by the TPS team for at least 12 months after surgery. At a minimum, the goals set by TPS included cessation of all opioid use for prior nonopioid users (NOU) by 90 days after surgery and the return to baseline opioid use or lower for prior COU patients by 90 days after surgery. The TPS also encouraged and supported opioid tapering among COU patients to reduce or completely stop opioid use after surgery.
Patient Cohorts
Veterans having primary or revision total knee, hip, or shoulder arthroplasty or rotator cuff repair between January 1, 2016, and September 30, 2019, at the aforementioned VAMCs were included in the study. Patients who had any hospitalization within 90 days pre- or postindex surgery or who died within 8 months after surgery were excluded from analysis. Patients who had multiple surgeries during the index inpatient visit or within 90 days after the index surgery also were excluded. Comorbid conditions for mental health and substance use were identified using International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) codes or their 9th revision equivalents, grouped by Clinical Classifications Software Refined (CCS-R).11 Preoperative exposure to clinically relevant pharmacotherapy (ie, agents associated with prolonged opioid use and nonopioid adjuvants) was captured using VA outpatient prescription records (eAppendix 1).
Outcome Variables
Outcome variables included health care use and costs during 1-year pre- and postperiods from the date of surgery. VA health care costs for outpatient, inpatient, and pharmacy services for direct patient care were collected from the Managerial Cost Accounting System, an activity-based cost allocation system that generates estimates of the cost of individual VA hospital stays, health care encounters, and medications. Health care use was defined as the number of encounters for each visit type in the Managerial Cost Accounting System. All costs were adjusted to 2019 US dollars, using the Personal Consumption Expenditures price index for health care services.15
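The price-index adjustment described above follows a simple ratio. The sketch below illustrates the arithmetic; the index values are hypothetical placeholders, not actual Personal Consumption Expenditures figures:

```python
# Inflation-adjust a nominal cost to 2019 US dollars using a price index.
# NOTE: index values below are hypothetical placeholders for illustration only.
PCE_INDEX = {2016: 104.0, 2017: 106.0, 2018: 108.0, 2019: 110.0}

def to_2019_dollars(nominal_cost: float, year: int) -> float:
    """Scale a cost observed in `year` by the ratio of the 2019 index
    value to that year's index value."""
    return nominal_cost * PCE_INDEX[2019] / PCE_INDEX[year]

# A $10,000 cost observed in 2016, expressed in 2019 dollars:
print(round(to_2019_dollars(10_000.0, 2016), 2))  # 10576.92
```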
Sociodemographic variables, including sex, age at surgery, race and ethnicity, rurality, military branch (Army, Air Force, Marine Corps, Navy, and other), and service connection status, were included as covariates in our regression models.
Statistical Analyses
Descriptive analyses were used to evaluate differences in baseline patient sociodemographic and clinical characteristics between pre- and postperiods for TPS intervention and control cohorts using 2-sample t tests for continuous variables and χ2 tests for categorical variables. We summarized unadjusted health care use and costs for outpatient, inpatient, and pharmacy visits and compared the pre- and postintervention periods using the Mann-Whitney test. Both mean (SD) and median (IQR) were considered, reflecting the skewed distribution of the outcome variables.
We used a DID approach to assess the intervention effect while minimizing confounding from the nonrandom sample. The DID approach controls for unobserved differences between VAMCs that are related to both the intervention and outcomes while controlling for trends over time that could affect outcomes across clinics. To implement the DID approach, we included 3 key independent variables in our regression models: (1) an indicator for whether the observation occurred in the postintervention period; (2) an indicator for whether the patient was exposed to the TPS intervention; and (3) the interaction between these 2 variables.
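The DID logic can be illustrated with a minimal numeric sketch. The TPS means below echo the unadjusted outpatient visit figures reported in the Results; the control-group means are hypothetical, so the printed value is illustrative rather than the adjusted estimate from the regression models:

```python
# Difference-in-differences: the pre-to-post change in the TPS group minus
# the same change in the control group; the remainder is attributed to the
# intervention.
def did_estimate(tps_pre: float, tps_post: float,
                 ctrl_pre: float, ctrl_post: float) -> float:
    return (tps_post - tps_pre) - (ctrl_post - ctrl_pre)

# Mean outpatient visits per patient (control means are hypothetical):
effect = did_estimate(tps_pre=30.0, tps_post=37.0,
                      ctrl_pre=29.0, ctrl_post=29.1)
print(round(effect, 1))  # 6.9
```

In the regression setting, the same quantity is recovered as the coefficient on the interaction of the postperiod and TPS indicators, with covariates absorbing group and time differences.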
For cost outcomes, we used multivariable generalized linear models with a log link and a Poisson or gamma family. We analyzed inpatient costs using a 2-part generalized linear model because only 17% to 20% of patients had ≥ 1 inpatient visit. We used multivariable negative binomial regression for health care use outcomes. The demographic and clinical covariates described earlier were included in the regression models to control for differences in the composition of patient groups and clinics that could lead to confounding bias.
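The 2-part structure for inpatient costs can be sketched as follows: a first model estimates the probability of any inpatient use, a second models costs among users, and the unconditional expected cost is their product. The probability and conditional mean below are illustrative stand-ins, not fitted values from the study:

```python
# Two-part model logic for semicontinuous cost data (sketch).
# Part 1 would come from a binary model (any inpatient visit: yes/no);
# Part 2 from a positive-cost model (e.g., a gamma GLM with log link).
def expected_cost(p_any_use: float, mean_cost_given_use: float) -> float:
    """Unconditional expected cost = P(use > 0) * E[cost | use > 0]."""
    return p_any_use * mean_cost_given_use

# Hypothetical inputs: ~18% of patients with any inpatient visit,
# conditional mean cost of $40,000.
print(round(expected_cost(0.18, 40_000.0)))  # 7200
```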
RESULTS
Of the 4954 patients included in our study cohort, 3545 (71.6%) were in the NOU group and 1409 (28.4%) were in the COU group. Among the NOU cohort, 361 patients were in the intervention group and 3184 in the control group. Among the COU cohort, 149 patients were in the intervention group and 1260 in the control group (Table 1). Most patients were male and White, with a mean (SD) age of 64 (11) years. The most common orthopedic procedure was total knee arthroplasty, followed by total hip arthroplasty. In both the NOU and COU cohorts, patient characteristics were similar between the pre- and postintervention periods within both the TPS and control groups.
Figures 1 and 2 and eAppendix 2 depict unadjusted per-person average outpatient, inpatient, and pharmacy visits and costs incurred during the 1-year pre- and postintervention periods for the NOU and COU cohorts. Average total health care follow-up costs ranged from $40,000 to $53,000 for the NOU cohort and from $47,000 to $82,000 for the COU cohort. Outpatient visit costs accounted for about 70% of average total costs, inpatient visit costs for about 20%, and pharmacy costs for the remainder.
For the NOU cohort, the number of health care encounters remained fairly stable between periods except for outpatient visits among the TPS group, which increased in the postperiod from a mean of 30 to 37 visits (a 23% increase).
Table 2 summarizes the results from the multivariable DID analyses for the outpatient, inpatient, and pharmacy visit and cost outcomes. Here, the estimated effect of the TPS intervention is the coefficient of the interaction between the postintervention and TPS exposure indicator variables. This coefficient was calculated as the difference in the outcome before and after the TPS intervention among the TPS group minus the corresponding difference among the control group. For the NOU cohort, TPS was associated with an increase in outpatient health care use (mean [SD] increase of 6.9 [2] visits; P < .001) after surgery, with no statistically significant effect on outpatient costs (mean [SD] increase of $2787 [$3749]; P = .55). There was no statistically significant effect of TPS on the use of inpatient or pharmacy services, but there was a decrease in inpatient costs among those who had at least 1 inpatient visit (mean [SD] decrease of $12,170 [$6100]; P = .02). For the COU cohort, TPS had no statistically significant impact on the use of outpatient, inpatient, or pharmacy services or the corresponding costs.
DISCUSSION
TPS is a multidisciplinary approach to perioperative pain management that has been shown to reduce both the quantity and duration of opioid use among orthopedic surgery patients.6,7 Although the cost burden of providing TPS services to prevent COU is borne by the individual health care system, it is unclear whether this expense is offset by lower long-term medical costs and health care use for COU- and OUD-related complications. In this study focused on a veteran population undergoing orthopedic joint procedures, a DID analysis of cost and health care use showed that TPS, which has been shown to reduce COU for high-risk surgical patients, can be implemented without increasing the overall costs to the VA health care system during the 1 year following surgery, even with increased outpatient visits. For NOU patients, there was no difference in outpatient visit costs or pharmacy costs over 12 months after surgery, although there was a significant reduction in subsequent inpatient costs over the same period. Further, there was no difference in outpatient, inpatient, or pharmacy costs after surgery for COU patients. These findings suggest that TPS can be a cost-effective approach to reduce opioid use among patients undergoing orthopedic joint surgery in VAMCs.
The costs of managing COU after surgery are substantial. Prior reports have shown that adjusted total health care costs are 1.6 to 2.5 times higher for previously NOU patients with new COU after major surgery than those for such patients without persistent use.16 The 1-year costs associated with new COU in this prior study ranged between $7944 and $17,702 after inpatient surgery and between $5598 and $12,834 after outpatient index surgery, depending on the payer, which are in line with the cost differences found in our current study. Another report among patients with COU following orthopedic joint replacement showed that they had higher use of inpatient, emergency department, and ambulance/paramedic services in the 12 months following their surgery than did those without persistent use.17 Although these results highlight the impact that COU plays in driving increased costs after major surgery, there have been limited studies focused on interventions that can neutralize the costs associated with opioid misuse after surgery. To our knowledge, our study is the first analysis to show the impact of using an intervention such as TPS to reduce postoperative opioid use on health care use and cost.
Although a rigorous and comprehensive return on investment analysis was beyond the scope of this analysis, these results may have several implications for other health care systems and hospitals that wish to invest in a multidisciplinary perioperative pain management program such as TPS but may be reluctant due to the upfront investment. First, the increased number of patient follow-up visits needed during TPS seems to be more than offset by the reduction in opioid use and associated complications that may occur after surgery. Second, TPS did not seem to be associated with an increase in overall health care costs during the 1-year follow-up period. Together, these results indicate that the return on investment for a TPS approach to perioperative pain management in which optimal patient-centered outcomes are achieved without increasing long-term costs to a health care system may be positive.
Limitations
This study has several limitations. First, this was a quasi-experimental observational study, and the associations we identified between intervention and outcomes should not be assumed to demonstrate causality. Although our DID analysis controlled for an array of demographic and clinical characteristics, differences in medical costs and health care use between the 2 cohorts might be driven by unobserved confounding variables.
Our study also was limited to veterans who received medical care at the VA, and results may not be generalizable to non-VA health care systems or to veterans with dual Medicare benefits. Although our findings on health care use and costs may be incomplete because of uncaptured care outside the VA, the DID analysis helped reduce this bias because the absence of non-VA data applies to both the TPS and control groups. Further, the total costs of operating a TPS program at any given institution will depend on the size of the hospital and the volume of surgical patients who meet enrollment criteria. However, the relative differences in health care use and costs may be extrapolated to patients undergoing orthopedic surgery in other academic and community-based health care systems.
Furthermore, this analysis focused primarily on COU and NOU patients undergoing orthopedic joint surgery. While this represents a high-risk population for OUD, the costs and health care use associated with delivering the TPS intervention to other types of surgical procedures may be significantly different. All costs in this analysis were based on 2019 estimates and do not account for the potential inflation over the past several years. Nonmonetary costs to the patient and per-person average total intervention costs were not included in the study. However, we assumed that costs associated with TPS and standard of care would have increased to an equivalent degree over the same period. Further, the average cost of TPS per patient (approximately $900) is relatively small compared with the average annual costs during 1-year pre- and postoperative periods and was not expected to have a significant effect on the analysis.
Conclusions
We found that the significant reduction in COU seen in previous studies following the implementation of TPS was not accompanied by increased health care costs.6,7 When considering the other costs of long-term opioid use, such as abuse potential, overdose, death, and increased disability, implementation of a TPS service has the potential to improve patient quality of life while reducing other health-related costs. Health care systems should consider the implementation of similar multidisciplinary approaches to perioperative pain management to improve outcomes after orthopedic joint surgery and other high-risk procedures.
1. Rudd RA, Seth P, David F, et al. Increases in drug and opioid-involved overdose deaths—United States, 2010-2015. MMWR Morb Mortal Wkly Rep. 2016;65(50-51):1445-1452. doi:10.15585/mmwr.mm655051e1
2. Florence CS, Zhou C, Luo F, Xu L. The economic burden of prescription opioid overdose, abuse, and dependence in the United States, 2013. Med Care. 2016;54(10):901-906. doi:10.1097/MLR.0000000000000625
3. Jiang X, Orton M, Feng R, et al. Chronic opioid usage in surgical patients in a large academic center. Ann Surg. 2017;265(4):722-727. doi:10.1097/SLA.0000000000001780
4. Johnson SP, Chung KC, Zhong L, et al. Risk of prolonged opioid use among opioid-naive patients following common hand surgery procedures. J Hand Surg Am. 2016;41(10):947-957, e3. doi:10.1016/j.jhsa.2016.07.113
5. Brummett CM, Waljee JF, Goesling J, et al. New persistent opioid use after minor and major surgical procedures in US adults. JAMA Surg. 2017;152(6):e170504. doi:10.1001/jamasurg.2017.0504
6. Buys MJ, Bayless K, Romesser J, et al. Multidisciplinary transitional pain service for the veteran population. Fed Pract. 2020;37(10):472-478. doi:10.12788/fp.0053
7. Buys MJ, Bayless K, Romesser J, et al. Opioid use among veterans undergoing major joint surgery managed by a multidisciplinary transitional pain service. Reg Anesth Pain Med. 2020;45(11):847-852. doi:10.1136/rapm-2020-101797
8. Huang A, Katz J, Clarke H. Ensuring safe prescribing of controlled substances for pain following surgery by developing a transitional pain service. Pain Manag. 2015;5(2):97-105. doi:10.2217/pmt.15.7
9. Katz J, Weinrib A, Fashler SR, et al. The Toronto General Hospital Transitional Pain Service: development and implementation of a multidisciplinary program to prevent chronic postsurgical pain. J Pain Res. 2015;8:695-702. doi:10.2147/JPR.S91924
10. Fihn SD, Francis J, Clancy C, et al. Insights from advanced analytics at the Veterans Health Administration. Health Aff (Millwood). 2014;33(7):1203-1211. doi:10.1377/hlthaff.2014.0054
11. Agency for Healthcare Research and Quality. Clinical Classifications Software Refined (CCSR). Updated December 9, 2022. Accessed October 30, 2023. www.hcup-us.ahrq.gov/toolssoftware/ccsr/ccs_refined.jsp
12. Mosher HJ, Richardson KK, Lund BC. The 1-year treatment course of new opioid recipients in Veterans Health Administration. Pain Med. 2016;17(7):1282-1291. doi:10.1093/pm/pnw058
13. Hadlandsmyth K, Mosher HJ, Vander Weg MW, O’Shea AM, McCoy KD, Lund BC. Utility of accumulated opioid supply days and individual patient factors in predicting probability of transitioning to long-term opioid use: an observational study in the Veterans Health Administration. Pharmacol Res Perspect. 2020;8(2):e00571. doi:10.1002/prp2.571
14. Pagé MG, Kudrina I, Zomahoun HTV, et al. Relative frequency and risk factors for long-term opioid therapy following surgery and trauma among adults: a systematic review protocol. Syst Rev. 2018;7(1):97. doi:10.1186/s13643-018-0760-3
15. US. Bureau of Economic Analysis. Price indexes for personal consumption expenditures by major type of product. Accessed October 30, 2023. https://apps.bea.gov/iTable/?reqid=19&step=3&isuri=1&nipa_table_list=64&categories=survey
16. Brummett CM, Evans-Shields J, England C, et al. Increased health care costs associated with new persistent opioid use after major surgery in opioid-naive patients. J Manag Care Spec Pharm. 2021;27(6):760-771. doi:10.18553/jmcp.2021.20507
17. Gold LS, Strassels SA, Hansen RN. Health care costs and utilization in patients receiving prescriptions for long-acting opioids for acute postsurgical pain. Clin J Pain. 2016;32(9):747-754. doi:10.1097/ajp.0000000000000322
Increasing Local Productivity Through a Regional Antimicrobial Stewardship Collaborative
The importance of formalized antimicrobial stewardship programs (ASPs) has gained recognition over the past 2 decades. The increasing requirements for ASP programs from national entities often outpace the staffing, technology, and analytic support needed to meet these demands.1,2 A multimodal approach to stewardship that includes education initiatives, audit-and-feedback methodology, and system support is effective in producing sustained change.3 However, this approach is resource intensive, and many ASPs must look outward for additional support.
Centralized ASP collaboratives and stewardship networks have been effective in positively impacting initiatives and outcomes through resource sharing.3-5 These collaboratives can take on multiple forms ranging from centralized education distribution to individual sites coming together to set goals and develop strategies to address common issues.5-8 Collaboratives can provide enhanced data analysis through data pooling, which may lead to shared dashboards or antibiotic use (AU) reports, allowing for robust benchmarking.5-7 Productivity at individual centers is often measured by AU and antimicrobial resistance (AMR) rates, but these measures alone do not fully capture the benefits of collaborative participation.
The US Department of Veterans Affairs (VA), similar to other large health care systems, is uniquely positioned to promote the development of ASP collaboratives due to the use of the same electronic health record system and infrastructure for data. This centralized data lends itself more readily to data dashboards and interfacility comparison. In turn, the identification of facilities that have outlying data for specific measures can lead to a collaborative effort to identify aberrant processes or facility-specific problems and identify, implement, and track the progress of appropriate solutions with less effort and resources.7 The VA has a national stewardship group, the Antimicrobial Stewardship Task Force (ASTF), that identifies and disseminates best practices and advocates for stewardship resources.
VA facilities are heterogeneous with regard to patient population, services, availability of specialists, and antibiotic resistance patterns.9 Therefore, clinical practice and needs vary. The ASTF has spearheaded the development of regional collaboratives, recognizing the potential benefit of smaller groups with shared leadership. The Veterans Integrated Services Networks (VISNs) are geographically demarcated regions that lend themselves well to coordination among member facilities due to similar populations, challenges, and opportunities. The Veterans Affairs Midsouth Healthcare Network (VISN 9) includes 5 facilities across Tennessee, Kentucky, Mississippi, Arkansas, Georgia, Virginia, and Indiana and serves about 293,000 veterans, ranging from 35,000 to 105,000 per facility.
A VISN 9 stewardship collaborative (as described by Buckel and colleagues in 2022) was established to enhance member facility ASPs through shared goal setting.6 Initially, the collaborative met quarterly; however, with increased participation and the onset of COVID-19, the collaborative evolved to meet burgeoning ASP needs. While intrafacility multidisciplinary ASP collaboration has been previously published, few publications on interfacility collaborations exist.3-6 To our knowledge, no previous publications have reported the impact of a VA ASP collaborative on the productivity and effectiveness of participating ASP facilities and the region. We aim to share the structure and processes of this ASP collaborative, demonstrate its impact through quantification of productivity, and aid others in developing similar collaboratives to further ASPs’ impact.
Methods
The regional VISN 9 ASP collaborative was formed in January 2020 to address common issues across facilities and optimize human capital and resources. The initial collaborative included ASP pharmacists but quickly expanded to include physicians and nurse practitioners. The collaborative is co-led by 2 rotating members from different facilities.
In April 2021, clinical guidance and research/quality improvement (QI) subcommittees were created. The monthly research/QI subcommittee discusses current initiatives and barriers to ongoing research, adapts and disseminates successful interventions to other facilities, and develops new collaborative initiatives. The clinical guidance subcommittee creates and disseminates clinical expert recommendations regarding common issues or emerging needs.
Data Plan and Collection
To measure success and growth, we evaluated annual facility reports that convey the state of each facility’s ASP, outline its current initiatives and progress, highlight areas of need, and set a programmatic goal and strategy for the upcoming year. These reports, required by a VA directive, are submitted annually by each facility to local and VISN leadership and must address the following 7 areas: (1) ASP structure and fulfillment of national VA policy for ASP; (2) fulfillment of the Joint Commission ASP standards; (3) ASP metrics; (4) ASP activities and interventions; (5) ASP QI and research initiatives; (6) education; and (7) goals and priorities.
To standardize evaluation and accurately reflect ASP effort across heterogeneous reports, 4 core areas were identified from areas 1, 3, 4, and 5 listed previously. Area 2 was excluded for its similarity among all facilities, and areas 6 and 7 were excluded for significant differences in definitions and reporting across facilities.
The project team consisted of 5 members from the collaborative who initially discussed definitions and annual report review methodology. A subgroup was assigned to area 1 and another to areas 3, 4, and 5 for initial review and data extraction. Results were later reviewed to address discrepancies and finalize collation and presentation.
The impact of the collaborative on individual facilities was measured by both quantitative and qualitative measures. Quantitative measures included: (1) designated ASP pharmacy, physician, or advanced practice provider (APP) full-time equivalents (FTE) at each facility compared with the recommended FTE for facility size; (2) the number of inpatient and outpatient ASP AU metrics for each facility and the VISN total; (3) reported improvement in annual ASP metrics calculated as frequency of improved metrics for each facility and the VISN; (4) the number of QI or research initiatives for each facility and the VISN, which included clinical pathways and order sets; and (5) the number of initiatives published as either abstract or manuscript.10 Additionally, the number of collaborative efforts involving more than 1 facility was tracked. Qualitative data included categories of metrics and QI and research initiatives. Data were collected by year and facility. Facilities are labeled A to E throughout this article.
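The quantitative roll-up described above, pooling each facility's tracked and improved metrics to compute the frequency of improved metrics for the VISN (measure 3), can be sketched as follows. This is a minimal illustration only; the facility labels, counts, and field names are hypothetical and are not data from the study.

```python
from dataclasses import dataclass

@dataclass
class FacilityYear:
    """One facility's annual-report tallies (hypothetical structure)."""
    facility: str          # facility label, e.g., "A"
    year: int
    metrics_tracked: int   # inpatient + outpatient AU metrics reported
    metrics_improved: int  # metrics with reported year-over-year improvement
    initiatives: int       # QI/research initiatives, incl. pathways and order sets
    publications: int      # initiatives published as abstract or manuscript

def percent_improved(rows):
    """Frequency of improved metrics, pooled across facilities."""
    tracked = sum(r.metrics_tracked for r in rows)
    improved = sum(r.metrics_improved for r in rows)
    return 100 * improved / tracked if tracked else 0.0

# Hypothetical VISN-level roll-up for a single year
rows = [
    FacilityYear("A", 2022, 8, 3, 5, 2),
    FacilityYear("B", 2022, 9, 2, 6, 4),
    FacilityYear("C", 2022, 7, 2, 4, 1),
]
print(f"VISN percent improved: {percent_improved(rows):.1f}%")  # 29.2%
```

The same per-facility records support the other counts (initiatives, publications) by simple summation across facilities and years.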
Along with facility annual ASP reports, facility and VISN AU trends for fiscal years (FY) 2019-2022 were collected from existing VA dashboards tracking AU in both acute respiratory infections (ARI) and in patients with COVID-19. Quantitative data included facility and VISN quarterly AU rates for ARI, extracted from the national VA dashboard. Facility and VISN AU rates in patients with COVID-19 were extracted from a dashboard developed by the VISN 9 ASP collaborative. The VISN 9 Institutional Review Board deemed this work QI and approval was waived.
Results
In 2019, only 2 sites (A and C) reported dedicated FTE compared with recommended minimum staffing; neither met minimum requirements. In 2020, 1 facility (B) met the physician FTE recommendation, and 2 facilities met the pharmacy minimum FTE (D and E). In 2021 and 2022, 2 of 5 facilities (B and E) met the physician minimum FTE, and 2 of 5 (D and E) met the minimum pharmacy FTE recommendations. For the study years 2019 to 2022, 1 facility (E) met both pharmacy and physician FTE recommendations in 2021 and 2022, and 2 facilities (A and C) never met minimum FTE recommendations.
Regarding ASP metrics, all facilities tracked and reported inpatient AU; however, facility A did not document inpatient metrics for FY 2021. The number of individual inpatient metrics varied annually; however, FY 2022 saw the highest reported for the VISN (n = 40), with a more even distribution across facilities (Figure 1). Common metrics in 2022 included total AU, broad-spectrum gram-negative AU, anti–methicillin-resistant Staphylococcus aureus (MRSA) agent use, antibiotics with high risk for Clostridioides difficile infection (CDI), and AU in patients with COVID-19. The percentage of improved metrics for VISN 9 was consistent, ranging from 26.5% to 34.8%, throughout the study period.
From 2019 to 2022, facilities reporting outpatient AU increased from 3 to 5 and included fluoroquinolone use and AU in ARI. VISN 9 outpatient metrics increased every year except 2021, with improved distribution across facilities. The total number of outpatient metrics with reported improvement increased from 3 of 11 (27%) in 2019 to 20 of 33 (60%) in 2022.
Antimicrobial Stewardship Initiatives
Quantitative and qualitative data regarding initiatives are reported in Figure 2 and the eAppendix, respectively. Since the formation of the collaborative, total initiatives increased from 33 in 2019 to 41 in 2022. In 2019, before the collaborative, individual facilities were working on similar projects in parallel, which included MRSA decolonization (A and C), surgical prophylaxis (A and E), asymptomatic bacteriuria (A and C), and CDI (B, C, D, and E). The development of clinical pathways and order sets remained consistent, ranging from 15 to 19 throughout the study period except for 2020, when 33 clinical pathways and/or order sets were developed. Collaboration between sites also remained consistent, with at least 1 shared clinical pathway and/or order menu between sites reported yearly for 2020, 2021, and 2022. The number of publications from VISN 9 grew from 2 in 2019 to 17 in 2022. In 2019, there were no collaborative research or QI publications, but in 2022 there were 2 joint publications, 1 between 2 facilities (A and C) and 1 including all facilities.
ARI and COVID-19 were identified by the collaborative as VISN priorities, leading to shared metrics and benchmarking across facilities. From 2019 to 2022, increased collaboration on these initiatives was noted at all facilities. The ARI goal was established to reduce inappropriate prescribing for ARI/bronchitis to under 20% across VISN 9. Rates dropped from 50.3% (range, 35.4%-77.6%) in FY 2019 quarter (Q) 1 to 15% (range, 8%-18.3%) in FY 2022 Q4. The clinical guidance subcommittee developed a guideline for AU in patients with COVID-19 that was approved by the VISN 9 Pharmacy & Therapeutics Committee. A VISN 9 dashboard was developed to track inpatient and outpatient AU for COVID-19. Antibiotic prescribing in the first 4 days of hospitalization decreased from 62.2% at the start of the COVID-19 pandemic to 48.7% after dissemination of COVID-19 guidance.
Discussion
This study demonstrates the benefit of participating in a regional ASP collaborative for individual facilities and the region. Some products from the collaborative include the development of regionwide guidance for the use of antimicrobials in COVID-19, interfacility collaborative initiatives, a COVID-19 dashboard, improvement in metrics, and several publications. Importantly, this expansion occurred during the COVID-19 pandemic when many ASP members were spread thin. Moreover, despite 4 sites not meeting VA-recommended ASP staffing requirements for both pharmacists and physicians, productivity increased within the VISN as facilities worked together, sharing local challenges and successful paths to removing ASP barriers. The collaborative shared QI strategies, advocated for technological support (ie, Theradoc and dashboards) to maximize available ASP human capital, standardized metric reporting, and made continued efforts sustainable.
Previous reports in the literature have found ASP collaboratives to be an effective model for long-term program growth.3 Two collaboratives found improved adherence to the Centers for Disease Control and Prevention core elements for ASP.4,5
Our findings highlight that ASP collaboratives can help answer the recent call to action from McGregor, Fitzpatrick, and Suda, who advocated for ASPs to take the next steps in stewardship, which include standardization of evaluating metrics and the use of robust QI frameworks.11 Moving forward, an area for research could include a comparison of ASP collaborative infrastructures and productivity to identify the optimal fit depending on facility structure and setting. Parallel to our experience, other reports cite heterogeneous ASP metrics and a lack of benchmarking, spotlighting the need for standardization.8,11,12
Limitations
Using annual reports was a limitation for analyzing and reporting the full impact of the collaborative. Facility-level discretion over content led many facilities to report only their newest initiatives, which may have resulted in the omission of other ongoing work. Further, time invested in the ASP regional collaborative was not captured within annual reports; therefore, the opportunity cost cannot be determined.
Conclusions
The VA has an advantage that many private health care facilities do not: the ability to work across systems to ease the burden of duplicative work and more readily disseminate effective strategies. The regional ASP collaborative bred innovation and broke down silos. Implementation of the collaborative aided in building robust QI infrastructure, standardizing reporting and metrics, and providing greater support through facility alignment with regional guidance. ASP interfacility collaboratives provide a sustainable solution in a resource-limited landscape.
Acknowledgments
This work was made possible by the resources provided through the Antimicrobial Stewardship Programs in the Veterans Integrated Services Network (VISN) 9.
1. Pierce J, Stevens MP. COVID-19 and antimicrobial stewardship: lessons learned, best practices, and future implications. Int J Infect Dis. 2021;113:103-108. doi:10.1016/j.ijid.2021.10.001
2. Emberger J, Tassone D, Stevens MP, Markley JD. The current state of antimicrobial stewardship: challenges, successes, and future directions. Curr Infect Dis Rep. 2018;20(9):31. doi:10.1007/s11908-018-0637-6
3. Moehring RW, Yarrington ME, Davis AE, et al. Effects of a collaborative, community hospital network for antimicrobial stewardship program implementation. Clin Infect Dis. 2021;73(9):1656-1663. doi:10.1093/cid/ciab356
4. Logan AY, Williamson JE, Reinke EK, Jarrett SW, Boger MS, Davidson LE. Establishing an antimicrobial stewardship collaborative across a large, diverse health care system. Jt Comm J Qual Patient Saf. 2019;45(9):591-599. doi:10.1016/j.jcjq.2019.03.002
5. Dukhovny D, Buus-Frank ME, Edwards EM, et al. A collaborative multicenter QI initiative to improve antibiotic stewardship in newborns. Pediatrics. 2019;144(6):e20190589. doi:10.1542/peds.2019-0589
6. Buckel WR, Stenehjem EA, Hersh AL, Hyun DY, Zetts RM. Harnessing the power of health systems and networks for antimicrobial stewardship. Clin Infect Dis. 2022;75(11):2038-2044. doi:10.1093/cid/ciac515
7. Graber CJ, Jones MM, Goetz MB, et al. Decreases in antimicrobial use associated with multihospital implementation of electronic antimicrobial stewardship tools. Clin Infect Dis. 2020;71(5):1168-1176. doi:10.1093/cid/ciz941
8. Kelly AA, Jones MM, Echevarria KL, et al. A report of the efforts of the Veterans Health Administration national antimicrobial stewardship initiative. Infect Control Hosp Epidemiol. 2017;38(5):513-520. doi:10.1017/ice.2016.328
9. US Department of Veterans Affairs. About VHA. 2022. Updated September 7, 2023. Accessed November 7, 2023. https://www.va.gov/health/aboutVHA.asp
10. Echevarria K, Groppi J, Kelly AA, Morreale AP, Neuhauser MM, Roselle GA. Development and application of an objective staffing calculator for antimicrobial stewardship programs in the Veterans Health Administration. Am J Health Syst Pharm. 2017;74(21):1785-1790. doi:10.2146/ajhp160825
11. McGregor JC, Fitzpatrick MA, Suda KJ. Expanding antimicrobial stewardship through quality improvement. JAMA Netw Open. 2021;4(2):e211072. doi:10.1001/jamanetworkopen.2021.1072
12. Newland JG, Gerber JS, Kronman MP, et al. Sharing Antimicrobial Reports for Pediatric Stewardship (SHARPS): a quality improvement collaborative. J Pediatr Infect Dis Soc. 2018;7(2):124-128. doi:10.1093/jpids/pix020
The importance of formalized antimicrobial stewardship programs (ASPs) has gained recognition over the past 2 decades. The increasing requirements for ASP programs from national entities often outpace the staffing, technology, and analytic support needed to meet these demands.1,2 A multimodal approach to stewardship that includes education initiatives, audit-and-feedback methodology, and system support is effective in producing sustained change.3 However, this approach is resource intensive, and many ASPs must look outward for additional support.
Centralized ASP collaboratives and stewardship networks have been effective in positively impacting initiatives and outcomes through resource sharing.3-5 These collaboratives can take on multiple forms ranging from centralized education distribution to individual sites coming together to set goals and develop strategies to address common issues.5-8 Collaboratives can provide enhanced data analysis through data pooling, which may lead to shared dashboards or antibiotic use (AU) reports, allowing for robust benchmarking.5-7 Productivity at individual centers is often measured by AU and antimicrobial resistance (AMR) rates, but these measures alone do not fully capture the benefits of collaborative participation.
The US Department of Veterans Affairs (VA), similar to other large health care systems, is uniquely positioned to promote the development of ASP collaboratives due to the use of the same electronic health record system and infrastructure for data. This centralized data lends itself more readily to data dashboards and interfacility comparison. In turn, the identification of facilities that have outlying data for specific measures can lead to a collaborative effort to identify aberrant processes or facility-specific problems and identify, implement, and track the progress of appropriate solutions with less effort and resources.7 The VA has a national stewardship group, the Antimicrobial Stewardship Task Force (ASTF), that identifies and disseminates best practices and advocates for stewardship resources.
VA facilities are heterogeneous with regard to patient population, services, availability of specialists, and antibiotic resistance patterns.9 Therefore, clinical practice and needs vary. The ASTF has spearheaded the development of regional collaboratives, recognizing the potential benefit of smaller groups with shared leadership.The Veterans Integrated Services Networks (VISNs) are geographically demarcated regions that lend themselves well to coordination among member facilities due to similar populations, challenges, and opportunities. The Veterans Affairs Midsouth Healthcare Network (VISN 9) includes 5 facilities across Tennessee, Kentucky, Mississippi, Arkansas, Georgia, Virginia, and Indiana and serves about 293,000 veterans, ranging from 35,000 to 105,000 per facility.
A VISN 9 stewardship collaborative (as described by Buckel and colleagues in 2022) was established to enhance member facility ASPs through shared goal setting.6 Initially, the collaborative met quarterly; however, with increased participation and the onset of COVID-19, the collaborative evolved to meet burgeoning ASP needs. While intrafacility multidisciplinary ASP collaboration has been previously published, few publications on interfacility collaborations exist.3-6 To our knowledge, no previous publications have reported the impact of a VA ASP collaborative on the productivity and effectiveness of participating ASP facilities and the region. We aim to share the structure and processes of this ASP collaborative, demonstrate its impact through quantification of productivity, and aid others in developing similar collaboratives to further ASPs’ impact.
Methods
The regional VISN 9 ASP collaborative was formed in January 2020 to address common issues across facilities and optimize human capital and resources. The initial collaborative included ASP pharmacists but quickly expanded to include physicians and nurse practitioners. The collaborative is co-led by 2 members from different facilities that rotate.
In April 2021, clinical guidance and research/quality improvement (QI) subcommittees were created. The monthly research/QI subcommittee discusses current initiatives and barriers to ongoing research, adapt and disseminate successful interventions to other facilities, and develop new collaborative initiatives. The clinical guidance subcommittee creates and disseminates clinical expert recommendations regarding common issues or emerging needs.
Data Plan and Collection
To measure success and growth, we evaluated annual facility reports that convey the state of each facility’s ASP, outline its current initiatives and progress, highlight areas of need, and set a programmatic goal and strategy for the upcoming year. These reports, required by a VA directive, are submitted annually by each facility to local and VISN leadership and must address the following 7 areas: (1) ASP structure and fulfillment of national VA policy for ASP; (2) fulfillment of the Joint Commission ASP standards; (3) ASP metrics; (4) ASP activities and interventions; (5) ASP QI and research initiatives; (6) education; and (7) goals and priorities.
To standardize evaluation and accurately reflect ASP effort across heterogeneous reports, 4 core areas were identified from areas 1, 3, 4 and 5 listed previously. Area 2 was excluded for its similarity among all facilities, and areas 6 and 7 were excluded for significant differences in definitions and reporting across facilities.
The project team consisted of 5 members from the collaborative who initially discussed definitions and annual report review methodology. A subgroup was assigned to area 1 and another to areas 3, 4, and 5 for initial review and data extraction. Results were later reviewed to address discrepancies and finalize collation and presentation.
The impact of the collaborative on individual facilities was measured by both quantitative and qualitative measures. Quantitative measures included: (1) designated ASP pharmacy, physician, or advanced practice provider (APP) full-time equivalents (FTE) at each facility compared with the recommended FTE for facility size; (2) the number of inpatient and outpatient ASP AU metrics for each facility and the VISN total; (3) reported improvement in annual ASP metrics calculated as frequency of improved metrics for each facility and the VISN; (4) the number of QI or research initiatives for each facility and the VISN, which included clinical pathways and order sets; and (5) the number of initiatives published as either abstract or manuscript.10 Additionally, the number of collaborative efforts involving more than 1 facility was tracked. Qualitative data included categories of metrics and QI and research initiatives. Data were collected by year and facility. Facilities are labeled A to E throughout this article.
Along with facility annual ASP reports, facility and VISN AU trends for fiscal years (FY) 2019-2022 were collected from existing VA dashboards tracking AU in both acute respiratory infections (ARI) and in patients with COVID-19. Quantitative data included facility and VISN quarterly AU rates for ARI, extracted from the national VA dashboard. Facility and VISN AU rates in patients with COVID-19 were extracted from a dashboard developed by the VISN 9 ASP collaborative. The VISN 9 Institutional Review Board deemed this work QI and approval was waived.
Results
In 2019, only 2 sites (A and C) reported dedicated FTE compared with recommended minimum staffing; neither met minimum requirements. In 2020, 1 facility (B) met the physician FTE recommendation, and 2 facilities met the pharmacy minimum FTE (D and E). In 2021 and 2022, 2 of 5 facilities (B and E) met the physician minimum FTE, and 2 of 5 (D and E) met the minimum pharmacy FTE recommendations. For the study years 2019 to 2022, 1 facility (E) met both pharmacy and physician FTE recommendations in 2021 and 2022, and 2 facilities (A and C) never met minimum FTE recommendations.
Regarding ASP metrics, all facilities tracked and reported inpatient AU; however, facility A did not document inpatient metrics for FY 2021. The number of individual inpatient metrics varied annually; however, FY 2022 saw the highest reported for the VISN (n = 40), with a more even distribution across facilities (Figure 1). Common metrics in 2022 included total AU, broad-spectrum gram-negative AU, anti–methicillin-resistant Staphylococcus aureus (MRSA) agent use, antibiotics with high risk for Clostridioides difficile infection (CDI), and AU in patients with COVID-19. The percentage of improved metrics for VISN 9 was consistent, ranging from 26.5% to 34.8%, throughout the study period.
From 2019 to 2022, facilities reporting outpatient AU increased from 3 to 5 and included fluoroquinolone use and AU in ARI. VISN 9 outpatient metrics increased every year except in 2021 with improved distribution across facilities. The number of total metrics with reported improvement in the outpatient setting overall increased from 3 of 11 (27%) in 2019 to 20 of 33 (60%) in 2022.
Antimicrobial Stewardship Initiatives
Quantitative and qualitative data regarding initiatives are reported in Figure 2 and the eAppendix respectively. Since the formation of the collaborative, total initiatives increased from 33 in 2019 to 41 in 2022. In 2019, before the collaborative, individual facilities were working on similar projects in parallel, which included MRSA decolonization (A and C), surgical prophylaxis (A and E), asymptomatic bacteriuria (A and C), and CDI (B, C, D, and E). The development of clinical pathways and order sets remained consistent, ranging from 15 to 19 throughout the study period except for 2020, when 33 clinical pathways and/or order sets were developed. Collaboration between sites also remained consistent, with 1 shared clinical pathway and/or order menu between at least 1 site reported yearly for 2020, 2021, and 2022. The number of publications from VISN 9 grew from 2 in 2019 to 17 in 2022. In 2019, there were no collaborative research or QI publications, but in 2022 there were 2 joint publications, 1 between 2 facilities (A and C) and 1 including all facilities.
ARI and COVID-19 were identified by the collaborative as VISN priorities, leading to shared metrics and benchmarking across facilities. From 2019 to 2022, increased collaboration on these initiatives was noted at all facilities. The ARI goal was established to reduce inappropriate prescribing for ARI/bronchitis to under 20% across VISN 9. Rates dropped from 50.3% (range, 35.4%-77.6%) in FY 2019 quarter (Q) 1 to 15% (range, 8%-18.3%) in FY 2022 Q4. The clinical guidance subcommittee developed a guideline for AU in patients with COVID-19 that was approved by the VISN 9 Pharmacy & Therapeutics Committee. A VISN 9 dashboard was developed to track inpatient and outpatient AU for COVID-19. Antibiotic prescribing in the first 4 days of hospitalization decreased from 62.2% at the start of the COVID-19 pandemic to 48.7% after dissemination of COVID-19 guidance.
Discussion
This study demonstrates the benefit of participating in a regional ASP collaborative for individual facilities and the region. Some products from the collaborative include the development of regionwide guidance for the use of antimicrobials in COVID-19, interfacility collaborative initiatives, a COVID-19 dashboard, improvement in metrics, and several publications. Importantly, this expansion occurred during the COVID-19 pandemic when many ASP members were spread thin. Moreover, despite 4 sites not meeting VA-recommended ASP staffing requirements for both pharmacists and physicians, productivity increased within the VISN as facilities worked together sharing local challenges and successful paths in removing ASP barriers.The collaborative shared QI strategies, advocated for technological support (ie, Theradoc and dashboards) to maximize available ASP human capital, standardized metric reporting, and made continued efforts sustainable.
Previous reports in the literature have found ASP collaboratives to be an effective model for long-term program growth.3 Two collaboratives found improved adherence to the Centers for Disease Control and Prevention core elements for ASP.4,5
Our findings highlight that ASP collaboratives can help answer the recent call to action from McGregor, Fitzpatrick, and Suda who advocated for ASPs to take the next steps in stewardship, which include standardization of evaluating metrics and the use of robust QI frameworks.11 Moving forward, an area for research could include a comparison of ASP collaborative infrastructures and productivity to identify optimal fit dependent on facility structure and setting. Parallel to our experience, other reports cite heterogeneous ASP metrics and a lack of benchmarking, spotlighting the need for standardization.8,11,12
Limitations
Using annual reports was a limitation for analyzing and reporting the full impact of the collaborative. Local facility-level discretion of content inclusion led to many facilities only reporting on the forefront of new initiatives that they had developed and may have led to the omission of other ongoing work. Further, time invested into the ASP regional collaborative was not captured within annual reports; therefore, the opportunity cost cannot be determined.
Conclusions
The VA has an advantage that many private health care facilities do not: the ability to work across systems to ease the burden of duplicative work and more readily disseminate effective strategies. The regional ASP collaborative bred innovation and the tearing down of silos. The implementation of the collaborative aided in robust QI infrastructure, standardization of reporting and metrics, and greater support through facility alignments with regional guidance. ASP interfacility collaboratives provide a sustainable solution in a resource-limited landscape.
Acknowledgments
This work was made possible by the resources provided through the Antimicrobial Stewardship Programs in the Veterans Integrated Services Network (VISN) 9.
The importance of formalized antimicrobial stewardship programs (ASPs) has gained recognition over the past 2 decades. The increasing requirements for ASP programs from national entities often outpace the staffing, technology, and analytic support needed to meet these demands.1,2 A multimodal approach to stewardship that includes education initiatives, audit-and-feedback methodology, and system support is effective in producing sustained change.3 However, this approach is resource intensive, and many ASPs must look outward for additional support.
Centralized ASP collaboratives and stewardship networks have been effective in positively impacting initiatives and outcomes through resource sharing.3-5 These collaboratives can take on multiple forms ranging from centralized education distribution to individual sites coming together to set goals and develop strategies to address common issues.5-8 Collaboratives can provide enhanced data analysis through data pooling, which may lead to shared dashboards or antibiotic use (AU) reports, allowing for robust benchmarking.5-7 Productivity at individual centers is often measured by AU and antimicrobial resistance (AMR) rates, but these measures alone do not fully capture the benefits of collaborative participation.
The US Department of Veterans Affairs (VA), similar to other large health care systems, is uniquely positioned to promote the development of ASP collaboratives due to the use of the same electronic health record system and infrastructure for data. This centralized data lends itself more readily to data dashboards and interfacility comparison. In turn, the identification of facilities that have outlying data for specific measures can lead to a collaborative effort to identify aberrant processes or facility-specific problems and identify, implement, and track the progress of appropriate solutions with less effort and resources.7 The VA has a national stewardship group, the Antimicrobial Stewardship Task Force (ASTF), that identifies and disseminates best practices and advocates for stewardship resources.
VA facilities are heterogeneous with regard to patient population, services, availability of specialists, and antibiotic resistance patterns.9 Therefore, clinical practice and needs vary. The ASTF has spearheaded the development of regional collaboratives, recognizing the potential benefit of smaller groups with shared leadership.The Veterans Integrated Services Networks (VISNs) are geographically demarcated regions that lend themselves well to coordination among member facilities due to similar populations, challenges, and opportunities. The Veterans Affairs Midsouth Healthcare Network (VISN 9) includes 5 facilities across Tennessee, Kentucky, Mississippi, Arkansas, Georgia, Virginia, and Indiana and serves about 293,000 veterans, ranging from 35,000 to 105,000 per facility.
A VISN 9 stewardship collaborative (as described by Buckel and colleagues in 2022) was established to enhance member facility ASPs through shared goal setting.6 Initially, the collaborative met quarterly; however, with increased participation and the onset of COVID-19, it evolved to meet burgeoning ASP needs. While intrafacility multidisciplinary ASP collaboration has been described previously, few publications on interfacility collaborations exist.3-6 To our knowledge, no previous publications have reported the impact of a VA ASP collaborative on the productivity and effectiveness of participating facilities and the region. We aim to share the structure and processes of this ASP collaborative, demonstrate its impact through quantification of productivity, and aid others in developing similar collaboratives to further ASPs’ impact.
Methods
The regional VISN 9 ASP collaborative was formed in January 2020 to address common issues across facilities and optimize human capital and resources. The initial collaborative included ASP pharmacists but quickly expanded to include physicians and nurse practitioners. The collaborative is co-led by 2 rotating members from different facilities.
In April 2021, clinical guidance and research/quality improvement (QI) subcommittees were created. The monthly research/QI subcommittee discusses current initiatives and barriers to ongoing research, adapts and disseminates successful interventions to other facilities, and develops new collaborative initiatives. The clinical guidance subcommittee creates and disseminates clinical expert recommendations regarding common issues or emerging needs.
Data Plan and Collection
To measure success and growth, we evaluated annual facility reports that convey the state of each facility’s ASP, outline its current initiatives and progress, highlight areas of need, and set a programmatic goal and strategy for the upcoming year. These reports, required by a VA directive, are submitted annually by each facility to local and VISN leadership and must address the following 7 areas: (1) ASP structure and fulfillment of national VA policy for ASP; (2) fulfillment of the Joint Commission ASP standards; (3) ASP metrics; (4) ASP activities and interventions; (5) ASP QI and research initiatives; (6) education; and (7) goals and priorities.
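The 7 required report areas can be sketched as a simple completeness checklist. This is an illustrative sketch, not VA tooling; the area names are paraphrased from the directive, and the report layout (a dict mapping area name to content) is hypothetical.

```python
# Hypothetical checklist of the 7 annual-report areas required by VA directive.
REQUIRED_AREAS = (
    "ASP structure and national VA policy fulfillment",
    "Joint Commission ASP standards",
    "ASP metrics",
    "ASP activities and interventions",
    "ASP QI and research initiatives",
    "Education",
    "Goals and priorities",
)

def missing_areas(report):
    """Return the required areas absent or empty in a facility report,
    where `report` is a dict mapping area name -> content."""
    return [area for area in REQUIRED_AREAS if not report.get(area)]
```

A report covering all 7 areas yields an empty list; any omitted or empty area is flagged by name.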
To standardize evaluation and accurately reflect ASP effort across heterogeneous reports, 4 core areas were identified from areas 1, 3, 4, and 5 listed previously. Area 2 was excluded because it was similar across all facilities, and areas 6 and 7 were excluded because definitions and reporting differed significantly across facilities.
The project team consisted of 5 members from the collaborative who initially discussed definitions and annual report review methodology. A subgroup was assigned to area 1 and another to areas 3, 4, and 5 for initial review and data extraction. Results were later reviewed to address discrepancies and finalize collation and presentation.
The impact of the collaborative on individual facilities was measured by both quantitative and qualitative measures. Quantitative measures included: (1) designated ASP pharmacy, physician, or advanced practice provider (APP) full-time equivalents (FTE) at each facility compared with the recommended FTE for facility size; (2) the number of inpatient and outpatient ASP AU metrics for each facility and the VISN total; (3) reported improvement in annual ASP metrics calculated as frequency of improved metrics for each facility and the VISN; (4) the number of QI or research initiatives for each facility and the VISN, which included clinical pathways and order sets; and (5) the number of initiatives published as either abstract or manuscript.10 Additionally, the number of collaborative efforts involving more than 1 facility was tracked. Qualitative data included categories of metrics and QI and research initiatives. Data were collected by year and facility. Facilities are labeled A to E throughout this article.
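As an illustration of how quantitative measures like these can be tallied, the sketch below aggregates per-year VISN totals and the frequency of improved metrics from annual-report records. The record layout and the numbers are invented for demonstration; they are not the study's data.

```python
# Illustrative sketch (not the authors' code): tallying annual-report
# measures per year across facilities. Records are hypothetical.
from collections import defaultdict

reports = [
    # (facility, year, metrics_tracked, metrics_improved, initiatives)
    ("A", 2019, 6, 2, 5),
    ("B", 2019, 5, 1, 4),
    ("A", 2022, 9, 3, 8),
    ("B", 2022, 8, 5, 7),
]

def visn_summary(reports):
    """Aggregate per-year VISN totals and percentage of improved metrics."""
    totals = defaultdict(lambda: {"metrics": 0, "improved": 0, "initiatives": 0})
    for _facility, year, tracked, improved, initiatives in reports:
        totals[year]["metrics"] += tracked
        totals[year]["improved"] += improved
        totals[year]["initiatives"] += initiatives
    return {
        year: {**t, "pct_improved": round(100 * t["improved"] / t["metrics"], 1)}
        for year, t in totals.items()
    }
```

The "frequency of improved metrics" measure then falls out as improved metrics divided by tracked metrics for each year.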
Along with facility annual ASP reports, facility and VISN AU trends for fiscal years (FY) 2019-2022 were collected from existing VA dashboards tracking AU in both acute respiratory infections (ARI) and in patients with COVID-19. Quantitative data included facility and VISN quarterly AU rates for ARI, extracted from the national VA dashboard. Facility and VISN AU rates in patients with COVID-19 were extracted from a dashboard developed by the VISN 9 ASP collaborative. The VISN 9 Institutional Review Board deemed this work QI and approval was waived.
Results
In 2019, only 2 sites (A and C) reported dedicated FTE compared with recommended minimum staffing; neither met minimum requirements. In 2020, 1 facility (B) met the physician FTE recommendation, and 2 facilities met the pharmacy minimum FTE (D and E). In 2021 and 2022, 2 of 5 facilities (B and E) met the physician minimum FTE, and 2 of 5 (D and E) met the minimum pharmacy FTE recommendations. For the study years 2019 to 2022, 1 facility (E) met both pharmacy and physician FTE recommendations in 2021 and 2022, and 2 facilities (A and C) never met minimum FTE recommendations.
Regarding ASP metrics, all facilities tracked and reported inpatient AU; however, facility A did not document inpatient metrics for FY 2021. The number of individual inpatient metrics varied annually; FY 2022 saw the highest number reported for the VISN (n = 40), with a more even distribution across facilities (Figure 1). Common metrics in 2022 included total AU, broad-spectrum gram-negative AU, anti–methicillin-resistant Staphylococcus aureus (MRSA) agent use, antibiotics with high risk for Clostridioides difficile infection (CDI), and AU in patients with COVID-19. The percentage of improved metrics for VISN 9 was consistent throughout the study period, ranging from 26.5% to 34.8%.
From 2019 to 2022, facilities reporting outpatient AU increased from 3 to 5 and included fluoroquinolone use and AU in ARI. VISN 9 outpatient metrics increased every year except in 2021 with improved distribution across facilities. The number of total metrics with reported improvement in the outpatient setting overall increased from 3 of 11 (27%) in 2019 to 20 of 33 (60%) in 2022.
Antimicrobial Stewardship Initiatives
Quantitative and qualitative data regarding initiatives are reported in Figure 2 and the eAppendix, respectively. Since the formation of the collaborative, total initiatives increased from 33 in 2019 to 41 in 2022. In 2019, before the collaborative, individual facilities were working on similar projects in parallel, including MRSA decolonization (A and C), surgical prophylaxis (A and E), asymptomatic bacteriuria (A and C), and CDI (B, C, D, and E). The development of clinical pathways and order sets remained consistent, ranging from 15 to 19 throughout the study period except for 2020, when 33 clinical pathways and/or order sets were developed. Collaboration between sites also remained consistent, with at least 1 clinical pathway and/or order menu shared between sites reported yearly for 2020, 2021, and 2022. The number of publications from VISN 9 grew from 2 in 2019 to 17 in 2022. In 2019, there were no collaborative research or QI publications, but in 2022 there were 2 joint publications: 1 between 2 facilities (A and C) and 1 including all facilities.
ARI and COVID-19 were identified by the collaborative as VISN priorities, leading to shared metrics and benchmarking across facilities. From 2019 to 2022, increased collaboration on these initiatives was noted at all facilities. The ARI goal was established to reduce inappropriate prescribing for ARI/bronchitis to under 20% across VISN 9. Rates dropped from 50.3% (range, 35.4%-77.6%) in FY 2019 quarter (Q) 1 to 15% (range, 8%-18.3%) in FY 2022 Q4. The clinical guidance subcommittee developed a guideline for AU in patients with COVID-19 that was approved by the VISN 9 Pharmacy & Therapeutics Committee. A VISN 9 dashboard was developed to track inpatient and outpatient AU for COVID-19. Antibiotic prescribing in the first 4 days of hospitalization decreased from 62.2% at the start of the COVID-19 pandemic to 48.7% after dissemination of COVID-19 guidance.
Discussion
This study demonstrates the benefit of participating in a regional ASP collaborative for individual facilities and the region. Products of the collaborative include regionwide guidance for the use of antimicrobials in COVID-19, interfacility collaborative initiatives, a COVID-19 dashboard, improvement in metrics, and several publications. Importantly, this expansion occurred during the COVID-19 pandemic, when many ASP members were spread thin. Moreover, despite 4 sites not meeting VA-recommended ASP staffing requirements for both pharmacists and physicians, productivity increased within the VISN as facilities worked together, sharing local challenges and successful paths to removing ASP barriers. The collaborative shared QI strategies, advocated for technological support (ie, TheraDoc and dashboards) to maximize available ASP human capital, standardized metric reporting, and made continued efforts sustainable.
Previous reports in the literature have found ASP collaboratives to be an effective model for long-term program growth.3 Two collaboratives found improved adherence to the Centers for Disease Control and Prevention core elements for ASP.4,5
Our findings highlight that ASP collaboratives can help answer the recent call to action from McGregor, Fitzpatrick, and Suda, who advocated for ASPs to take the next steps in stewardship, including standardization of evaluating metrics and the use of robust QI frameworks.11 Moving forward, research could compare ASP collaborative infrastructures and productivity to identify the optimal fit for a given facility structure and setting. Parallel to our experience, other reports cite heterogeneous ASP metrics and a lack of benchmarking, spotlighting the need for standardization.8,11,12
Limitations
Using annual reports was a limitation for analyzing and reporting the full impact of the collaborative. Facility-level discretion over content inclusion meant many facilities reported only their newest initiatives, which may have led to the omission of other ongoing work. Further, time invested in the ASP regional collaborative was not captured within annual reports; therefore, the opportunity cost cannot be determined.
Conclusions
The VA has an advantage that many private health care facilities do not: the ability to work across systems to ease the burden of duplicative work and more readily disseminate effective strategies. The regional ASP collaborative fostered innovation and broke down silos. Its implementation aided in building robust QI infrastructure, standardizing reporting and metrics, and providing greater support through facility alignment with regional guidance. ASP interfacility collaboratives provide a sustainable solution in a resource-limited landscape.
Acknowledgments
This work was made possible by the resources provided through the Antimicrobial Stewardship Programs in the Veterans Integrated Services Network (VISN) 9.
1. Pierce J, Stevens MP. COVID-19 and antimicrobial stewardship: lessons learned, best practices, and future implications. Int J Infect Dis. 2021;113:103-108. doi:10.1016/j.ijid.2021.10.001
2. Emberger J, Tassone D, Stevens MP, Markley JD. The current state of antimicrobial stewardship: challenges, successes, and future directions. Curr Infect Dis Rep. 2018;20(9):31. doi:10.1007/s11908-018-0637-6
3. Moehring RW, Yarrington ME, Davis AE, et al. Effects of a collaborative, community hospital network for antimicrobial stewardship program implementation. Clin Infect Dis. 2021;73(9):1656-1663. doi:10.1093/cid/ciab356
4. Logan AY, Williamson JE, Reinke EK, Jarrett SW, Boger MS, Davidson LE. Establishing an antimicrobial stewardship collaborative across a large, diverse health care system. Jt Comm J Qual Patient Saf. 2019;45(9):591-599. doi:10.1016/j.jcjq.2019.03.002
5. Dukhovny D, Buus-Frank ME, Edwards EM, et al. A collaborative multicenter QI initiative to improve antibiotic stewardship in newborns. Pediatrics. 2019;144(6):e20190589. doi:10.1542/peds.2019-0589
6. Buckel WR, Stenehjem EA, Hersh AL, Hyun DY, Zetts RM. Harnessing the power of health systems and networks for antimicrobial stewardship. Clin Infect Dis. 2022;75(11):2038-2044. doi:10.1093/cid/ciac515
7. Graber CJ, Jones MM, Goetz MB, et al. Decreases in antimicrobial use associated with multihospital implementation of electronic antimicrobial stewardship tools. Clin Infect Dis. 2020;71(5):1168-1176. doi:10.1093/cid/ciz941
8. Kelly AA, Jones MM, Echevarria KL, et al. A report of the efforts of the Veterans Health Administration national antimicrobial stewardship initiative. Infect Control Hosp Epidemiol. 2017;38(5):513-520. doi:10.1017/ice.2016.328
9. US Department of Veterans Affairs. About VHA. 2022. Updated September 7, 2023. Accessed November 7, 2023. https://www.va.gov/health/aboutVHA.asp
10. Echevarria K, Groppi J, Kelly AA, Morreale AP, Neuhauser MM, Roselle GA. Development and application of an objective staffing calculator for antimicrobial stewardship programs in the Veterans Health Administration. Am J Health Syst Pharm. 2017;74(21):1785-1790. doi:10.2146/ajhp160825
11. McGregor JC, Fitzpatrick MA, Suda KJ. Expanding antimicrobial stewardship through quality improvement. JAMA Netw Open. 2021;4(2):e211072. doi:10.1001/jamanetworkopen.2021.1072
12. Newland JG, Gerber JS, Kronman MP, et al. Sharing Antimicrobial Reports for Pediatric Stewardship (SHARPS): a quality improvement collaborative. J Pediatr Infect Dis Soc. 2018;7(2):124-128. doi:10.1093/jpids/pix020
Chronic Kidney Disease and Military Service in US Adults, 1999-2018
Chronic kidney disease (CKD) affects nearly 37 million people (11%) in the US and is a leading cause of death and morbidity. Due to their older age and higher prevalence of comorbid conditions, the prevalence of CKD among veterans is approximately 34% higher than in the general population, and CKD is the fourth most common chronic disease diagnosed among US veterans.1,2 US veterans and those with prior military service (MS) may be at particularly high risk for CKD and associated health care outcomes, including increased hospitalization and death. The observed excess burden of CKD is not mirrored in the general population, and it is unclear whether prior MS confers a unique risk profile for CKD.
Current estimates of CKD burden among veterans or those with prior MS are widely variable and have been limited by unique regions, specific exposure profiles, or to single health care systems. As such, there remains a paucity of data examining CKD burden more broadly. We performed a study in the adult population of the US to quantify associations with the extent of CKD, enumerate temporal trends of CKD among those with prior MS, describe risk within subgroups, and compare heterogeneity of risk factors for CKD by MS.
Methods
The National Health and Nutrition Examination Survey (NHANES) is a suite of nationally representative, cross-sectional surveys of the noninstitutionalized US population. It is conducted by the National Center for Health Statistics and uses a stratified, clustered probability design, with surveys carried out without interruption, collated, and made accessible to the public at 2-year intervals.3 The survey consists of a questionnaire, physical examination, and laboratory data.
The inclusion criteria for our study were age ≥ 20 years and available serum creatinine and urinary albumin-creatinine measurements. The following definitions were used for the study:
• CKD: Estimated glomerular filtration rate < 60 mL/min/1.73 m2, calculated with the creatinine-based CKD Epidemiology Collaboration formula using serum creatinine calibrated to isotope dilution mass spectrometry (IDMS), or urinary albumin-creatinine ratio ≥ 30 mg/g.
• MS: Positive response to the questions “Did you ever serve in the Armed Forces of the United States?” (1999 to 2010) or “Have you ever served on active duty in the US Armed Forces, military Reserves, or National Guard?” (2011 to 2018).
• Diabetes: Self-reported history, medication for diabetes, or glycated hemoglobin ≥ 7%.
• Hypertension: Blood pressure ≥ 140/90 mm Hg (or ≥ 130/80 mm Hg in the presence of diabetes or CKD) or medication for hypertension.
• Cardiovascular disease: Self-reported myocardial infarction, cardiac failure, or cerebrovascular disease.2,3
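As a sketch of how the CKD definition translates to code, the functions below implement the 2009 creatinine-based CKD-EPI equation and the study's CKD criterion. The equation's coefficients come from the cited publication (Levey et al, reference 4), not from this article, and are shown here for illustration only.

```python
import math  # not strictly needed here, but kept for numeric work

def egfr_ckd_epi_2009(scr_mg_dl, age_years, female, black):
    """2009 creatinine-based CKD-EPI eGFR (mL/min/1.73 m^2).

    Coefficients per Levey et al (2009); assumes IDMS-calibrated
    serum creatinine in mg/dL.
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

def has_ckd(egfr, uacr_mg_g):
    """Study definition: eGFR < 60 mL/min/1.73 m^2 or
    urinary albumin-creatinine ratio >= 30 mg/g."""
    return egfr < 60 or uacr_mg_g >= 30
```

For example, a 50-year-old non-Black man with a serum creatinine of 0.9 mg/dL has an eGFR near 99, so he meets the CKD definition only if his albumin-creatinine ratio is ≥ 30 mg/g.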
Analysis
Primary sampling unit, stratum, and weight variables were employed throughout to generate parameter estimates generalizable to the US population.4,5 The χ2 test and logistic regression were used for comparison of proportions and estimation of odds ratios, respectively. Data were analyzed with R version 4.1.2.
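For illustration, an unadjusted odds ratio with a Wald 95% CI can be computed from a 2×2 table as below. This simplified sketch ignores the survey weights and design variables that the study's estimates incorporate, so it would not reproduce the published numbers.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    (Unweighted illustration; survey-weighted analysis differs.)
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi
```

The standard error is computed on the log-odds scale and exponentiated back, which is why the interval is asymmetric around the point estimate.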
Results
In the overall sample, the frequencies (standard error [SE]) of CKD and prior MS were 15.2% (0.3) and 11.5% (0.3), respectively (Table 1). The proportion (SE) with CKD was significantly higher among those with prior MS vs the overall population: 22.7% (0.7) vs 15.2% (0.3) (P < .001). Significant associations with CKD were observed (P < .05) by age, sex, race and ethnicity, family poverty, school education, health insurance, smoking, body mass index, diabetes, hypertension, cardiovascular disease, and malignancy. Within those reporting prior MS, the proportion (SE) with CKD differed by era: 1999 to 2002, 18.9% (1.1); 2003 to 2006, 24.9% (1.5); 2007 to 2010, 22.3% (1.5); 2011 to 2014, 24.3% (1.7); and 2015 to 2018, 24.0% (1.8) (P = .02) (Figure 1).
Without covariate adjustment, prior MS was significantly associated with an increased risk of CKD (unadjusted odds ratio [OR], 1.78; 95% CI, 1.64-1.93; P < .05) (Table 2). Prior MS was significantly associated with CKD in the following subgroups: 2003 to 2006, 2011 to 2014, 2015 to 2018, age groups of 40 to 64 years and ≥ 65 years, male sex, non-Hispanic White and Hispanic ethnicity, school education of grade 0 to 11, and private or other health insurance. Additional comorbidities strongly associated with CKD included hypertension (OR, 6.37; 95% CI, 5.37-7.55), diabetes (OR, 4.16; 95% CI, 3.45-5.03), and cardiovascular disease (OR, 4.20; 95% CI, 3.57-4.95).
In the population reporting prior MS, the unadjusted OR of CKD vs 1999 to 2002 was greater for all other examined eras; with the greatest likelihood observed for the 2003 to 2006 era. Unadjusted ORs of CKD differed in groups with and without prior MS (P value for interaction < .05) for 2003 to 2006, those aged 40 to 64 years and ≥ 65 years, female sex, non-Hispanic African American and Hispanic race and ethnicity, family poverty, high school education, private health insurance, any smoking history, diabetes, hypertension, and cardiovascular disease (Figure 2A).
Following adjustment for age, sex, and race and ethnicity, MS was associated with a 17% higher likelihood of CKD (adjusted odds ratio [AOR], 1.17; 95% CI, 1.06-1.28; P < .01) (Table 3). Prior MS was significantly associated (P < .05) with CKD in the subgroups: age groups 40 to 64 years and ≥ 65 years, non-Hispanic African American, and body mass index ≥ 30. Among those with prior MS, comorbidities strongly associated with CKD in adjusted models included hypertension (AOR, 3.86; 95% CI, 3.18-4.69), diabetes (AOR, 3.05; 95% CI, 2.44-3.82), and cardiovascular disease (AOR, 2.51; 95% CI, 2.09-3.01). In the population with prior MS, the adjusted likelihood of CKD vs 1999 to 2002 was similar across all eras. Adjusted associations of CKD differed in groups with and without prior MS for age groups 40 to 64 years and ≥ 65 years, female sex, and family poverty (P < .05) (Figure 2B).
Discussion
We observed that prior MS was associated with CKD, all eras were associated with CKD in the subgroup with MS, and risk factors for CKD differed among many subgroups both with and without MS history, a finding that remained present in adjusted models. In addition, the finding of CKD was relatively common among those with prior MS (approximately 15%) and was most strongly associated with increasing age and comorbidities frequently associated with CKD.
Although many studies have demonstrated associations of US veteran status with various comorbidities, including hypertension, obesity, and diabetes, these studies often are limited to those both qualifying for and receiving care within the US Department of Veterans Affairs (VA) health care system.6-9 The crude proportion of individuals reporting multiple chronic conditions, which included hypertension, diabetes, and weak or failing kidneys, was 49.7% for US veterans compared with 24.1% for nonveterans.2 Large-scale, nationally representative cohorts for use in this context have been limited by the heterogeneity of CKD definitions applied within limited timeframes, yielding variable estimates.1,10 Moreover, few studies have examined the clinical epidemiology of CKD more broadly in the US among those with prior MS. For example, a PubMed search on March 3, 2022, with the terms “epidemiology”, “military service”, and “chronic kidney disease” produced only 9 citations, one of which examined trends among a non-US cohort and another of which quantified disease burden among adolescents.
Whether or not prior MS confers a unique risk profile for CKD is unknown. While our findings of an increased CKD burden among those reporting MS may partially reflect observed increases in baseline comorbidities, the observed excess CKD among those with MS remained across multiple categories even after adjustment for baseline demography. As several studies have demonstrated, enlistment into MS may select for a more diverse population; however, those enlisted personnel may be of lower socioeconomic status and possibly at higher risk of CKD.11,12 Our findings of important differences in baseline determinants of health mirror this: the proportion reporting a high school education or lower was higher among those with both MS and CKD than among those with CKD alone (36.0% vs 21.8%), as was the proportion with a history of family poverty (21.1% vs 18.0%).
Limitations
Our study has several limitations, including its cross-sectional design, a lack of longitudinal data within individuals, and the exclusion of institutionalized individuals. Limitations notwithstanding, this study has several strengths. Because prior MS is highly heterogeneous, we were unable to stratify by service type or length of service. For example, veteran status is conferred to a “Reservist or member of the National Guard called to federal active duty or disabled from a disease or injury incurred or aggravated in line of duty or while in training status also qualify as a veteran” (13 CFR § 125.11). For the purposes of our study, prior MS includes all active-duty service (veterans) as well as reservists and National Guard members who have not been activated. This may be more representative of the overall effect of MS, as limiting analysis to those receiving care within the VA may select for an older, more multimorbid population of patients, limiting generalizability.
In addition, more detailed information regarding service-related exposures and other service-connected conditions would allow for a more granular risk assessment by service type, era, and military conflict. Our finding of excess CKD burden among those with prior MS compared with the overall population is timely given the recent passage of the Promise to Address Comprehensive Toxics (PACT) Act. Exposure to and injury from Agent Orange, a known service-connected exposure associated with incident hypertension and diabetes, may be a significant contributor to CKD and may carry a significant era effect. In addition, water contamination among those stationed at Camp Lejeune in North Carolina has notable genitourinary associations. Finally, burn pit exposures in more recent military conflicts may also have important associations with chronic disease, possibly including CKD. While similar attempts at creating large-scale US veteran cohorts have been limited by incomplete capture of creatinine, a large proportion of missing race data, and limited inclusion of additional markers of kidney disease, our use of a well-described, nationally representative survey along with standardized capture of clinical and laboratory elements mitigates reliance on various societal or other codified definitions.1
Conclusions
Prior MS is associated with an increased risk of CKD overall and across several important subgroups. This finding was observed in various unadjusted and adjusted models and may constitute a unique risk profile.
1. Ozieh MN, Gebregziabher M, Ward RC, Taber DJ, Egede LE. Creating a 13-year National Longitudinal Cohort of veterans with chronic kidney disease. BMC Nephrol. 2019;20(1):241. doi:10.1186/s12882-019-1430-y
2. Boersma P, Cohen RA, Zelaya CE, Moy E. Multiple chronic conditions among veterans and nonveterans: United States, 2015-2018. Natl Health Stat Report. 2021;(153):1-13.
3. Centers for Disease Control and Prevention, National Center for Health Statistics. National Health and Nutrition Survey. 2022. Accessed October 31, 2023. www.cdc.gov/nchs/nhanes/index.htm
4. Levey AS, Stevens LA, Schmid CH, et al. A new equation to estimate glomerular filtration rate. Ann Intern Med. 2009;150(9):604-612. doi:10.7326/0003-4819-150-9-200905050-00006
5. Selvin E, Manzi J, Stevens LA, et al. Calibration of serum creatinine in the National Health and Nutrition Examination Surveys (NHANES) 1988-1994, 1999-2004. Am J Kidney Dis. 2007;50(6):918-926. doi:10.1053/j.ajkd.2007.08.020
6. Smoley BA, Smith NL, Runkle GP. Hypertension in a population of active duty service members. J Am Board Fam Med. 2008;21(6):504-511. doi:10.3122/jabfm.2008.06.070182
7. Duckworth W, Abraira C, Moritz T, et al. Glucose control and vascular complications in veterans with type 2 diabetes. N Engl J Med. 2009;360(2):129-139. doi:10.1056/NEJMoa0808431
8. Smith TJ, Marriott BP, Dotson L, et al. Overweight and obesity in military personnel: sociodemographic predictors. Obesity (Silver Spring). 2012;20(7):1534-1538. doi:10.1038/oby.2012.25
9. Agha Z, Lofgren RP, VanRuiswyk JV, Layde PM. Are patients at Veterans Affairs medical centers sicker? A comparative analysis of health status and medical resource use. Arch Intern Med. 2000;160(21):3252-3257. doi:10.1001/archinte.160.21.3252
10. Saran R, Pearson A, Tilea A, et al. Burden and cost of caring for US veterans with CKD: initial findings from the VA Renal Information System (VA-REINS). Am J Kidney Dis. 2021;77(3):397-405. doi:10.1053/j.ajkd.2020.07.013
11. Wang L, Elder GH, Jr., Spence NJ. Status configurations, military service and higher education. Soc Forces. 2012;91(2):397-422. doi:10.1093/sf/sos174
12. Zeng X, Liu J, Tao S, Hong HG, Li Y, Fu P. Associations between socioeconomic status and chronic kidney disease: a meta-analysis. J Epidemiol Community Health. 2018;72(4):270-279. doi:10.1136/jech-2017-209815
Chronic kidney disease (CKD) affects nearly 37 million people (11%) in the US and is a leading cause of death and morbidity. Due to their older age and higher prevalence of comorbid conditions, the prevalence of CKD among veterans is approximately 34% higher than in the general population and the fourth most common chronic disease diagnosed among US veterans.1,2 US veterans and those with prior military service (MS) may be at a particularly high risk for CKD and associated health care outcomes including increased hospitalization and death. The observed excess burden of CKD is not mirrored in the general population, and it is unclear whether prior MS confers a unique risk profile for CKD.
Current estimates of CKD burden among veterans or those with prior MS are widely variable and have been limited by unique regions, specific exposure profiles, or to single health care systems. As such, there remains a paucity of data examining CKD burden more broadly. We performed a study in the adult population of the US to quantify associations with the extent of CKD, enumerate temporal trends of CKD among those with prior MS, describe risk within subgroups, and compare heterogeneity of risk factors for CKD by MS.
Methods
The National Health and Nutrition Examination Survey (NHANES) is a suite of nationally representative, cross-sectional surveys of the noninstitutionalized US population. It is conducted by the National Center for Health Statistics and uses a stratified, clustered probability design, with surveys carried out without interruption, collated, and made accessible to the public at 2-year intervals.3 The survey consists of a questionnaire, physical examination, and laboratory data.
The inclusion criteria for our study were age ≥ 20 years along with serum creatinine and urinary albumin-creatinine measurements. The following definitions were used for the study:
• CKD: Estimated glomerular filtration rate < 60 mL/min/1.73 m2 calibrated to isotope dilution mass spectrometry (IDMS).
• Traceable: Creatinine-based CKD Epidemiology Collaboration formula or urinary albumin-creatine ratio ≥ 30 mg/g.
• MS: Positive response to the questions “Did you ever serve in the Armed Forces of the United States?” (1999 to 2010) or “Have you ever served on active duty in the US Armed Forces, military Reserves, or National Guard?” (2011 to 2018).
• Diabetes: Self-reported history, medication for diabetes, or glycated hemoglobin ≥ 7%.
• Hypertension: Blood pressure ≥ 140/90 mm Hg (or ≥ 130/80 mm Hg in the presence of diabetes or CKD), medication for hypertension, or cardiovascular disease (myocardial infarction, cardiac failure, or cerebrovascular disease) by self-report.2,3
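The CKD definition above can be expressed as a short computational check. The sketch below uses the published CKD-EPI 2009 creatinine equation (the equation cited in reference 4, which included a race coefficient later removed in the 2021 refit); the function names and example values are illustrative, not from the study.

```python
import math

def ckd_epi_2009(scr_mg_dl, age, female, black=False):
    """CKD-EPI 2009 creatinine eGFR (mL/min/1.73 m2)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # 2009 equation's race coefficient
    return egfr

def has_ckd(scr_mg_dl, age, female, uacr_mg_g, black=False):
    # CKD per the study definition: eGFR < 60 mL/min/1.73 m2
    # or urinary albumin-creatinine ratio >= 30 mg/g
    return ckd_epi_2009(scr_mg_dl, age, female, black) < 60 or uacr_mg_g >= 30
```

For example, a 70-year-old man with a serum creatinine of 2.0 mg/dL has an eGFR of roughly 33 mL/min/1.73 m2 and would meet the definition even with normal albuminuria.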
Analysis
Primary sampling unit, stratum, and weight variables were used throughout to generate parameter estimates generalizable to the US population.4,5 The χ2 test and logistic regression were used for comparison of proportions and estimation of odds ratios, respectively. R Version 4.1.2 was used for data analysis.
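As a rough illustration of how an unadjusted odds ratio and Wald 95% CI are derived from a 2×2 table (the actual analysis additionally incorporated the survey weights and design variables, which this sketch ignores), with made-up counts:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for illustration only (not study data):
or_, lo, hi = odds_ratio_ci(50, 50, 25, 75)  # OR = 3.0
```

In the study itself, design-based estimation (eg, survey-weighted logistic regression in R) replaces this naive calculation so that estimates generalize to the US population.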
Results
In the overall sample, the frequencies (standard error [SE]) of CKD and prior MS were 15.2% (0.3) and 11.5% (0.3), respectively (Table 1). The proportion (SE) with CKD was significantly higher among those with prior MS vs the overall population: 22.7% (0.7) vs 15.2% (0.3) (P < .001). Significant associations with CKD were observed (P < .05) by age, sex, race and ethnicity, family poverty, school education, health insurance, smoking, body mass index, diabetes, hypertension, cardiovascular disease, and malignancy. Within those reporting prior MS, the proportion (SE) with CKD differed by era: 1999 to 2002, 18.9% (1.1); 2003 to 2006, 24.9% (1.5); 2007 to 2010, 22.3% (1.5); 2011 to 2014, 24.3% (1.7); and 2015 to 2018, 24.0% (1.8) (P = .02) (Figure 1).
Without covariate adjustment, prior MS was significantly associated with an increased risk of CKD (unadjusted odds ratio [OR], 1.78; 95% CI, 1.64-1.93; P < .05) (Table 2). Prior MS was significantly associated with CKD in the following subgroups: 2003 to 2006, 2011 to 2014, 2015 to 2018, age groups of 40 to 64 years and ≥ 65 years, male sex, non-Hispanic White and Hispanic ethnicity, school education of grade 0 to 11, and private or other health insurance. Additional comorbidities strongly associated with CKD included hypertension (OR, 6.37; 95% CI, 5.37-7.55), diabetes (OR, 4.16; 95% CI, 3.45-5.03), and cardiovascular disease (OR, 4.20; 95% CI, 3.57-4.95).
In the population reporting prior MS, the unadjusted OR of CKD vs 1999 to 2002 was greater for all other examined eras, with the greatest likelihood observed for the 2003 to 2006 era. Unadjusted ORs of CKD differed in groups with and without prior MS (P value for interaction < .05) for 2003 to 2006, those aged 40 to 64 years and ≥ 65 years, female sex, non-Hispanic African American and Hispanic race and ethnicity, family poverty, high school education, private health insurance, any smoking history, diabetes, hypertension, and cardiovascular disease (Figure 2A).
Following adjustment for age, sex, and race and ethnicity, MS was associated with a 17% higher likelihood of CKD (adjusted odds ratio [AOR], 1.17; 95% CI, 1.06-1.28; P < .01) (Table 3). Prior MS was significantly associated (P < .05) with CKD in the subgroups: age groups 40 to 64 years and ≥ 65 years, non-Hispanic African American, and body mass index ≥ 30. Among those with prior MS, comorbidities strongly associated with CKD in adjusted models included hypertension (AOR, 3.86; 95% CI, 3.18-4.69), diabetes (AOR, 3.05; 95% CI, 2.44-3.82), and cardiovascular disease (AOR, 2.51; 95% CI, 2.09-3.01). In the population with prior MS, the adjusted likelihood of CKD vs 1999 to 2002 was similar across all eras. Adjusted associations of CKD differed in groups with and without prior MS for age groups 40 to 64 years and ≥ 65 years, female sex, and family poverty (P < .05) (Figure 2B).
Discussion
We observed that prior MS was associated with CKD, that all eras were associated with CKD in the subgroup with MS, and that risk factors for CKD differed among many subgroups both with and without a history of MS, findings that remained present in adjusted models. In addition, CKD was relatively common among those with prior MS (nearly 23%) and was most strongly associated with increasing age and comorbidities frequently associated with CKD.
Although many studies have demonstrated associations of US veteran status with various comorbidities, including hypertension, obesity, and diabetes, these studies often are limited to those both qualifying for and receiving care within the US Department of Veterans Affairs (VA) health care system.6-9 The crude proportion of individuals reporting multiple chronic conditions, which included hypertension, diabetes, and weak or failing kidneys, was 49.7% for US veterans compared with 24.1% for nonveterans.2 Large-scale, nationally representative cohorts for use in this context have been limited by the heterogeneity of CKD definitions applied over limited timeframes, yielding variable estimates.1,10 Moreover, few studies have examined the clinical epidemiology of CKD more broadly in the US among those with prior MS. For example, a PubMed search on March 3, 2022, with the terms “epidemiology,” “military service,” and “chronic kidney disease” produced only 9 citations, one of which examined trends in a non-US cohort and another of which quantified disease burden among adolescents.
Whether prior MS confers a unique risk profile for CKD is unknown. While our findings of an increased CKD burden among those reporting MS may partially reflect observed increases in baseline comorbidities, the observed excess CKD among those with MS remained across multiple categories even after adjustment for baseline demography. As several studies have demonstrated, enlistment into MS may select for a more diverse population; however, enlisted personnel may be of lower socioeconomic status and possibly at higher risk of CKD.11,12 Our findings of important differences in baseline determinants of health mirror this: the proportion reporting a high school education or lower was higher among respondents with both MS and CKD than among those with CKD alone (36.0% vs 21.8%), as was the proportion reporting a history of family poverty (21.1% vs 18.0%).
Limitations
Our study has several limitations, including its cross-sectional design, a lack of longitudinal data within individuals, and the exclusion of institutionalized individuals. Limitations notwithstanding, this study has several important aspects. Because prior MS is highly variable, we were unable to stratify by service type or length of service. For example, a “Reservist or member of the National Guard called to federal active duty or disabled from a disease or injury incurred or aggravated in line of duty or while in training status” also qualifies as a veteran (13 CFR § 125.11). For the purposes of our study, prior MS would include all active-duty service (veterans) as well as reservists and National Guard members who have not been activated. This may be more representative of the overall effect of MS, as limiting the sample to those receiving care within the VA may select for an older, more multimorbid population of patients, limiting generalizability.
In addition, more detailed information regarding service-related exposures and other service-connected conditions would allow for a more granular risk assessment by service type, era, and military conflict. Our finding of excess CKD burden among those with prior MS compared with the overall population is timely given the recent passage of the Promise to Address Comprehensive Toxics (PACT) Act. Exposure to and injury from Agent Orange—a known service-connected exposure associated with incident hypertension and diabetes—may be a significant contributor to CKD with a significant era effect. In addition, water contamination among those stationed at Camp Lejeune in North Carolina has notable genitourinary associations. Finally, burn pit exposures in more recent military conflicts may also have important associations with chronic disease, possibly including CKD. While similar attempts at the creation of large-scale US veteran cohorts have been limited by incomplete capture of creatinine, a large proportion of missing race data, and limited inclusion of additional markers of kidney disease, our use of a well-described, nationally representative survey along with standardized capture of clinical and laboratory elements mitigates reliance on various societal or other codified definitions.1
Conclusions
Prior MS is associated with an increased risk of CKD overall and across several important subgroups. This finding was observed in various unadjusted and adjusted models and may constitute a unique risk profile.
1. Ozieh MN, Gebregziabher M, Ward RC, Taber DJ, Egede LE. Creating a 13-year National Longitudinal Cohort of veterans with chronic kidney disease. BMC Nephrol. 2019;20(1):241. doi:10.1186/s12882-019-1430-y
2. Boersma P, Cohen RA, Zelaya CE, Moy E. Multiple chronic conditions among veterans and nonveterans: United States, 2015-2018. Natl Health Stat Report. 2021;(153):1-13.
3. Centers for Disease Control and Prevention, National Center for Health Statistics. National Health and Nutrition Examination Survey. 2022. Accessed October 31, 2023. www.cdc.gov/nchs/nhanes/index.htm
4. Levey AS, Stevens LA, Schmid CH, et al. A new equation to estimate glomerular filtration rate. Ann Intern Med. 2009;150(9):604-612. doi:10.7326/0003-4819-150-9-200905050-00006
5. Selvin E, Manzi J, Stevens LA, et al. Calibration of serum creatinine in the National Health and Nutrition Examination Surveys (NHANES) 1988-1994, 1999-2004. Am J Kidney Dis. 2007;50(6):918-926. doi:10.1053/j.ajkd.2007.08.020
6. Smoley BA, Smith NL, Runkle GP. Hypertension in a population of active duty service members. J Am Board Fam Med. 2008;21(6):504-511. doi:10.3122/jabfm.2008.06.070182
7. Duckworth W, Abraira C, Moritz T, et al. Glucose control and vascular complications in veterans with type 2 diabetes. N Engl J Med. 2009;360(2):129-139. doi:10.1056/NEJMoa0808431
8. Smith TJ, Marriott BP, Dotson L, et al. Overweight and obesity in military personnel: sociodemographic predictors. Obesity (Silver Spring). 2012;20(7):1534-1538. doi:10.1038/oby.2012.25
9. Agha Z, Lofgren RP, VanRuiswyk JV, Layde PM. Are patients at Veterans Affairs medical centers sicker? A comparative analysis of health status and medical resource use. Arch Intern Med. 2000;160(21):3252-3257. doi:10.1001/archinte.160.21.3252
10. Saran R, Pearson A, Tilea A, et al. Burden and cost of caring for US veterans with CKD: initial findings from the VA Renal Information System (VA-REINS). Am J Kidney Dis. 2021;77(3):397-405. doi:10.1053/j.ajkd.2020.07.013
11. Wang L, Elder GH, Jr., Spence NJ. Status configurations, military service and higher education. Soc Forces. 2012;91(2):397-422. doi:10.1093/sf/sos174
12. Zeng X, Liu J, Tao S, Hong HG, Li Y, Fu P. Associations between socioeconomic status and chronic kidney disease: a meta-analysis. J Epidemiol Community Health. 2018;72(4):270-279. doi:10.1136/jech-2017-209815
Discontinuation Schedule of Inhaled Corticosteroids in Patients With Chronic Obstructive Pulmonary Disease
Inhaled corticosteroids (ICSs) are frequently prescribed for the treatment of chronic obstructive pulmonary disease (COPD) to reduce exacerbations in a specific subset of patients. The long-term use of ICSs, however, is associated with several potential systemic adverse effects, including adrenal suppression, decreased bone mineral density, and immunosuppression.1 The concern for immunosuppression is particularly notable and leads to a known increased risk for developing pneumonia in patients with COPD. These patients frequently have other concurrent risk factors for pneumonia (eg, history of tobacco use, older age, and severe airway limitations) and are at higher risk for more severe outcomes in the setting of pneumonia.2,3
Primarily due to the concern of pneumonia risks, the Global Initiative for Chronic Obstructive Lung Disease (GOLD) guidelines have recommended ICS discontinuation in patients who are less likely to receive significant benefits from therapy.4 Likely due to an anti-inflammatory mechanism of action, ICSs have been shown to reduce COPD exacerbation rates in patients with comorbid asthma or who have evidence of a strong inflammatory component to their COPD. The strongest indicator of an inflammatory component is an elevated blood eosinophil (EOS) count; those with EOS > 300 cells/µL are most likely to benefit from ICSs, whereas those with a count < 100 cells/µL are unlikely to have a significant response. In addition to the inflammatory component consideration, prior studies have shown improvements in lung function and reduction of exacerbations with ICS use in patients with frequent moderate-to-severe COPD exacerbations.5 Although the GOLD guidelines provide recommendations about which patients are appropriate candidates for ICS discontinuation, clinicians have no clear guidance on the risks or the best discontinuation strategy.
Based primarily on data from a prior randomized controlled trial, the Veterans Integrated Services Network (VISN) 17, which includes the Veterans Affairs North Texas Health Care System (VANTHCS) in Dallas, established a recommended ICS de-escalation strategy.6,7 The strategy included a 12-week stepwise taper using a mometasone inhaler for all patients discontinuing a moderate or high dose ICS. The lack of substantial clinical trial data or expert consensus guideline recommendations has left open the question of whether a taper is necessary. To answer that question, this study was conducted to evaluate whether there is a difference in the rate of COPD exacerbations following abrupt discontinuation vs gradual taper of ICS therapy.
Methods
This single-center, retrospective cohort study was conducted at VANTHCS. Patient electronic health records between January 10, 2021, and September 1, 2021, were reviewed for the last documented fill date of any inhaler containing a steroid component. This time frame was chosen to coincide with a VANTHCS initiative to follow GOLD guidelines for ICS discontinuation. Patients were followed for outcomes until November 1, 2022.
To be included in this study, patients had to have active prescriptions at VANTHCS, have a documented diagnosis of COPD in their chart, and be prescribed a stable dose of ICS for ≥ 1 year prior to their latest refill. The inhaler used could contain an ICS as monotherapy, in combination with a long-acting β-agonist (LABA), or as part of triple therapy with an additional long-acting muscarinic antagonist (LAMA). The inhaler needed to be discontinued during the study period of interest.
Patients were excluded if they had a diagnosis of asthma, were aged < 40 years, had active prescriptions for multiple ICS inhalers or nebulizers, or had significant oral steroid use (≥ 5 mg/d prednisone or an equivalent steroid for > 6 weeks) within 1 year of their ICS discontinuation date. In addition, to reduce the risk of future events being misclassified as COPD exacerbations, patients were excluded if they had a congestive heart failure exacerbation up to 2 years before ICS discontinuation or a diagnosis of COVID-19 infection up to 1 year before or 6 months after ICS discontinuation. Patients with a COPD exacerbation requiring an emergency department or hospital visit within 2 years prior to ICS discontinuation were also excluded, as de-escalation of ICS therapy was likely inappropriate in these cases. Finally, patients were excluded if they were started on a different ICS immediately following the discontinuation of their first ICS.
The primary outcome for this study was COPD exacerbations requiring an emergency department visit or hospitalization within 6 months of ICS discontinuation. A secondary outcome examined the rates of COPD exacerbations within 12 months. The original study design called for the use of inferential statistics to compare the rates of primary and secondary outcomes in patients whose ICS was abruptly discontinued with those who were tapered slowly. After data collection, however, the small sample size and low event rate meant that the planned statistical tests were no longer appropriate. Instead, we analyzed the planned outcomes using descriptive statistics and examined a number of additional post hoc outcomes to provide deeper insight into clinical practice. We examined the association between relevant factors, such as age, comorbidity burden, ICS potency, duration of ICS therapy, and EOS count, and the clinician's decision whether to taper the ICS. These same factors were also evaluated for potential association with an increased risk of COPD exacerbations following ICS discontinuation.
Results
A total of 75 patients were included. Most patients were White and male, with a mean (SD) age of 71.6 (7.4) years. Charlson Comorbidity Index scores were calculated for all included patients, with a mean (SD) score of 5.4 (2.0). Of note, scores > 5 are considered a severe comorbidity burden and carry an estimated mean 10-year survival rate < 21%. The overwhelming majority of patients were receiving budesonide/formoterol as their ICS inhaler, with 1 receiving mometasone monotherapy. When evaluating the steroid dose, 18 (24%) patients received a low dose ICS (200-400 µg of budesonide or 110-220 µg of mometasone), while 57 (76%) received a medium dose (400-800 µg of budesonide or 440 µg of mometasone). No patients received a high ICS dose. The mean (SD) duration of therapy before discontinuation was 4.0 (2.7) years (Table 1).
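The budesonide dose bands used above can be captured in a small helper. This is an illustrative sketch based only on the thresholds stated here; because the stated low and medium ranges as written share the 400 µg boundary, assigning 400 µg to the low category is an assumption, as is treating > 800 µg as high (no patients in the study received a high dose).

```python
def budesonide_dose_category(daily_mcg: float) -> str:
    """Classify a total daily budesonide dose using the study's bands.
    Low: 200-400 ug/d; medium: >400-800 ug/d; high: >800 ug/d (assumed).
    The 400 ug boundary is assigned to 'low' by assumption."""
    if daily_mcg <= 400:
        return "low"
    if daily_mcg <= 800:
        return "medium"
    return "high"
```

For example, a patient taking budesonide/formoterol 160/4.5 µg, 2 inhalations twice daily (640 µg/d of budesonide), would fall in the medium category.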
Nine (12%) patients had their ICS slowly tapered, while therapy was abruptly discontinued in the other 66 (88%) patients. A variety of taper types were used (Figure) without a strong preference for a particular dosing strategy. The primary outcome of COPD exacerbation requiring emergency department visit or hospitalization within 6 months occurred in 2 patients. When the time frame was extended to 12 months for the secondary outcome, an additional 3 patients experienced an event. The mean time to event was 172 days following ICS discontinuation. All the events occurred in patients whose ICS was discontinued without any type of taper.
In a post hoc analysis, we examined the relationship between specific variables and the clinician choice whether to taper an ICS. There was no discernable impact of age, race and ethnicity, comorbidity score, or ICS dose on whether an ICS was tapered. We observed a slight association between shorter duration of therapy and lower EOS count and use of a taper. When evaluating the relationship between these same factors and exacerbation occurrence, we saw comparable trends (Table 2). Patients with an exacerbation had a slightly longer mean duration of ICS therapy and lower mean EOS count.
Discussion
Despite facility guidance recommending a taper when discontinuing a moderate- or high-dose ICS, most patients in this study discontinued the ICS abruptly. Clinicians may have been concerned about patients' ability to adhere to a taper regimen, skeptical of the actual need to taper, or unaware of the VANTHCS recommendation for a specific taper method. Shared decision making with patients may also have played a role in prescribing patterns. Currently, there are insufficient data to support the use of any one type of taper over another, which accounts for the variability seen in practice.
The decision to taper ICSs did not seem to be strongly associated with any specific demographic factor, although the ability to examine the impact of factors (eg, race and ethnicity) was limited due to the largely homogenous population. One may have expected a taper to be more common in older patients or in those with more comorbidities; however, this was not observed in this study. The only discernible trends seen were a lower frequency of tapering in patients who had a shorter duration of ICS therapy and those with lower EOS counts. These patients were at lower risk of repeat COPD exacerbations compared with those with longer ICS therapy duration and higher EOS counts; therefore, this finding was unexpected. This suggests that patient-specific factors may not be the primary driving force in the ICS tapering decision; instead it may be based on general clinician preferences or shared decision making with individual patients.
Overall, we noted very low rates of COPD exacerbations. As ICS discontinuation was occurring in stable patients without any recent exacerbations, lower rates of future exacerbations were expected compared with the population of patients with COPD as a whole. This suggests that ICS therapy can be safely stopped in stable patients with COPD who are not likely to receive significant benefits as defined in the GOLD guidelines. All of the exacerbations that occurred were in patients whose ICS was abruptly discontinued; however, given the small number of patients who had a taper, it is difficult to draw conclusions. The low overall rate of exacerbations suggests that a taper may not be necessary to ensure safety while stopping a low- or moderate-intensity ICS.
Several randomized controlled trials have attempted to evaluate the need for an ICS taper; however, results remain mixed. The COSMIC study showed a decline in lung function following ICS discontinuation in patients with ≥ 2 COPD exacerbations in the previous year.8 Similar results were seen in the SUNSET study with increased exacerbation rates after ICS discontinuation in patients with elevated EOS counts.9 However, these studies included patients for whom ICS discontinuation is currently not recommended. Alternatively, the INSTEAD trial looked at patients without frequent recent exacerbations and found no difference in lung function, exacerbation rates, or rescue inhaler use in patients that continued combination ICS plus bronchodilator use vs those de-escalated to bronchodilator monotherapy.10
All 3 studies chose to abruptly stop the ICS when discontinuing therapy; however, using a slow, stepwise taper similar to that used after long periods of oral steroid use may reduce the risk of worsening exacerbations. The WISDOM trial is the only major randomized trial to date that stopped ICS therapy using a stepwise withdrawal of therapy.7 In patients who were continued on triple inhaled therapy (2 bronchodilators plus ICS) vs those who were de-escalated to dual bronchodilator therapy, de-escalation was noninferior to continuation of therapy in time to first COPD exacerbation. Both the WISDOM and INSTEAD trials were consistent with the results found in our real-world retrospective evaluation.
There did not seem to be an increased exacerbation risk following ICS discontinuation in any patient subpopulation based on sex, age, race and ethnicity, or comorbidity burden. We noted a trend toward more exacerbations in patients with a longer duration of ICS therapy, suggesting that additional caution may be needed when stopping ICS therapy for these patients. We also noted a trend toward more exacerbations in patients with a lower mean EOS count; however, given the low event rate and wide variability in observed patient EOS counts, this is likely a spurious finding.
Limitations
The small sample size, resulting from the strict exclusion criteria, limits the generalizability of the results. Although the low number of events seen in this study supports safety in ICS discontinuation, there may have been higher rates observed in a larger population. The most common reason for patient exclusion was the initiation of another ICS immediately following discontinuation of the original ICS. During the study period, VANTHCS underwent a change to its formulary: Fluticasone/salmeterol replaced budesonide/formoterol as the preferred ICS/LABA combination. As a result, many patients had their budesonide/formoterol discontinued during the study period solely to initiate fluticasone/salmeterol therapy. As these patients did not truly have their ICS discontinued or have a significant period without ICS therapy, they were not included in the results, and the total patient population available to analyze was relatively limited.
The low event rate also limits the ability to compare various factors influencing exacerbation risk, particularly taper vs abrupt ICS discontinuation. This is further compounded by the small number of patients who had a taper performed and the lack of consistency in the method of tapering used. Statistical significance could not be determined for any outcome, and all findings were purely hypothesis generating. Finally, data were only collected for moderate or severe COPD exacerbations that resulted in an emergency department visit or hospitalization, so there may have been mild exacerbations treated in the outpatient setting that were not captured.
Despite these limitations, this study adds data to an area of COPD management that currently lacks strong clinical guidance. Since investigators had access to clinician notes, we were able to capture ICS tapers even if patients did not receive a prescription with specific taper instructions. The extended follow-up period of 12 months evaluated a longer potential time to impact of ICS discontinuation than is done in most COPD clinical trials.
Conclusions
Overall, very low rates of COPD exacerbations occurred following ICS discontinuation, regardless of whether a taper
1. Yang IA, Clarke MS, Sim EH, Fong KM. Inhaled corticosteroids for stable chronic obstructive pulmonary disease. Cochrane Database Syst Rev. 2012;7(7):CD002991. doi:10.1002/14651858.CD002991.pub3
2. Crim C, Dransfield MT, Bourbeau J, et al. Pneumonia risk with inhaled fluticasone furoate and vilanterol compared with vilanterol alone in patients with COPD. Ann Am Thorac Soc. 2015;12(1):27-34. doi:10.1513/AnnalsATS.201409-413OC
3. Crim C, Calverley PMA, Anderson JA, et al. Pneumonia risk with inhaled fluticasone furoate and vilanterol in COPD patients with moderate airflow limitation: The SUMMIT trial. Respir Med. 2017;131:27-34. doi:10.1016/j.rmed.2017.07.060
4. Global Initiative for Chronic Obstructive Lung Disease. Global strategy for the diagnosis, management, and prevention of chronic obstructive pulmonary disease (2023 Report). Accessed November 3, 2023. https://goldcopd.org/wp-content/uploads/2023/03/GOLD-2023-ver-1.3-17Feb2023_WMV.pdf
5. Nannini LJ, Lasserson TJ, Poole P. Combined corticosteroid and long-acting beta(2)-agonist in one inhaler versus long-acting beta(2)-agonists for chronic obstructive pulmonary disease. Cochrane Database Syst Rev. 2012;9(9):CD006829. doi:10.1002/14651858.CD006829.pub2
6. Kaplan AG. Applying the wisdom of stepping down inhaled corticosteroids in patients with COPD: a proposed algorithm for clinical practice. Int J Chron Obstruct Pulmon Dis. 2015;10:2535-2548. doi:10.2147/COPD.S93321
7. Magnussen H, Disse B, Rodriguez-Roisin R, et al; WISDOM Investigators. Withdrawal of inhaled glucocorticoids and exacerbations of COPD. N Engl J Med. 2014;371(14):1285-1294. doi:10.1056/NEJMoa1407154
8. Wouters EFM, Postma DS, Fokkens B, et al; COSMIC (COPD and Seretide: a Multi-Center Intervention and Characterization) Study Group. Withdrawal of fluticasone propionate from combined salmeterol/fluticasone treatment in patients with COPD causes immediate and sustained disease deterioration: a randomized controlled trial. Thorax. 2005;60(6):480-487. doi:10.1136/thx.2004.034280
9. Chapman KR, Hurst JR, Frent S-M, et al. Long-term triple therapy de-escalation to indacaterol/glycopyrronium in patients with chronic obstructive pulmonary disease (SUNSET): a randomized, double-blind, triple-dummy clinical trial. Am J Respir Crit Care Med. 2018;198(3):329-339. doi:10.1164/rccm.201803-0405OC
10. Rossi A, van der Molen T, del Olmo R, et al. INSTEAD: a randomized switch trial of indacaterol versus salmeterol/fluticasone in moderate COPD. Eur Respir J. 2014;44(6):1548-1556. doi:10.1183/09031936.00126814
Inhaled corticosteroids (ICSs) are frequently prescribed for the treatment of chronic obstructive pulmonary disease (COPD) to reduce exacerbations in a specific subset of patients. The long-term use of ICSs, however, is associated with several potential systemic adverse effects, including adrenal suppression, decreased bone mineral density, and immunosuppression.1 The concern for immunosuppression is particularly notable and leads to a known increased risk for developing pneumonia in patients with COPD. These patients frequently have other concurrent risk factors for pneumonia (eg, history of tobacco use, older age, and severe airway limitations) and are at higher risk for more severe outcomes in the setting of pneumonia.2,3
Primarily because of concerns about pneumonia risk, the Global Initiative for Chronic Obstructive Lung Disease (GOLD) guidelines have recommended ICS discontinuation in patients who are less likely to receive significant benefits from therapy.4 Likely due to their anti-inflammatory mechanism of action, ICSs have been shown to reduce COPD exacerbation rates in patients with comorbid asthma or with evidence of a strong inflammatory component to their COPD. The strongest indicator of an inflammatory component is an elevated blood eosinophil (EOS) count: patients with EOS counts > 300 cells/µL are most likely to benefit from ICSs, whereas those with counts < 100 cells/µL are unlikely to have a significant response. In addition, prior studies have shown improvements in lung function and reductions in exacerbations with ICS use in patients with frequent moderate-to-severe COPD exacerbations.5 Although the GOLD guidelines identify which patients are appropriate candidates for ICS discontinuation, clinicians have no clear guidance on the risks of discontinuation or the best discontinuation strategy.
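The eosinophil thresholds described above can be expressed as a small triage helper. This is an illustrative sketch only; the function name and category labels are ours, not part of the GOLD guidelines or this study:

```python
def ics_benefit_category(eos_cells_per_ul: float) -> str:
    """Classify likely ICS benefit from a blood eosinophil count,
    using the GOLD thresholds described in the text."""
    if eos_cells_per_ul > 300:
        return "likely benefit"    # strong inflammatory component
    if eos_cells_per_ul < 100:
        return "unlikely benefit"  # weak inflammatory component
    return "indeterminate"         # 100-300 cells/uL: individualize

print(ics_benefit_category(350))  # likely benefit
```

Counts in the 100 to 300 cells/µL range fall between the two thresholds the text states, so the sketch reports them as indeterminate.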
Based primarily on data from a prior randomized controlled trial, the Veterans Integrated Services Network (VISN) 17, which includes the Veterans Affairs North Texas Health Care System (VANTHCS) in Dallas, established a recommended ICS de-escalation strategy.6,7 The strategy includes a 12-week stepwise taper using a mometasone inhaler for all patients discontinuing a moderate- or high-dose ICS. The lack of substantial clinical trial data or expert consensus guideline recommendations has left open the question of whether a taper is necessary. To answer that question, this study evaluated whether the rate of COPD exacerbations differs following abrupt discontinuation vs gradual taper of ICS therapy.
Methods
This single-center, retrospective cohort study was conducted at VANTHCS. Patient electronic health records between January 10, 2021, and September 1, 2021, were reviewed for the last documented fill date of any inhaler containing a steroid component. This time frame was chosen to coincide with a VANTHCS initiative to follow GOLD guidelines for ICS discontinuation. Patients were followed for outcomes until November 1, 2022.
To be included in this study, patients had to have active prescriptions at VANTHCS, have a documented diagnosis of COPD in their chart, and be prescribed a stable dose of ICS for ≥ 1 year prior to their latest refill. The inhaler used could contain an ICS as monotherapy, in combination with a long-acting β-agonist (LABA), or as part of triple therapy with an additional long-acting muscarinic antagonist (LAMA). The inhaler needed to be discontinued during the study period of interest.
Patients were excluded if they had a diagnosis of asthma, were aged < 40 years, had active prescriptions for multiple ICS inhalers or nebulizers, or had significant oral steroid use (≥ 5 mg/d prednisone or an equivalent steroid for > 6 weeks) within 1 year of their ICS discontinuation date. In addition, to reduce the risk of future events being misclassified as COPD exacerbations, patients were excluded if they had a congestive heart failure exacerbation up to 2 years before ICS discontinuation or a diagnosis of COVID-19 infection up to 1 year before or 6 months after ICS discontinuation. Patients with a COPD exacerbation requiring an emergency department or hospital visit within 2 years prior to ICS discontinuation were also excluded, as de-escalation of ICS therapy was likely inappropriate in these cases. Finally, patients were excluded if they were started on a different ICS immediately following the discontinuation of their first ICS.
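As a rough sketch of how this chart review's exclusion criteria could be screened programmatically: the `Patient` record and its field names are hypothetical, and the COVID-19 window is omitted for brevity; the actual review was performed manually on electronic health records.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class Patient:
    """Hypothetical chart abstraction; fields mirror the stated criteria."""
    age: int
    has_asthma: bool
    active_ics_inhalers: int      # concurrently active ICS products
    oral_steroid_weeks: float     # weeks of >= 5 mg/d prednisone-equivalent in prior year
    ics_stop_date: date
    last_chf_exacerbation: Optional[date] = None
    last_copd_ed_visit: Optional[date] = None

def is_excluded(p: Patient) -> bool:
    """Apply the exclusion criteria described in the text (partial sketch)."""
    if p.has_asthma or p.age < 40 or p.active_ics_inhalers > 1:
        return True
    if p.oral_steroid_weeks > 6:  # significant oral steroid use
        return True
    two_years = timedelta(days=730)
    # CHF exacerbation or COPD-related ED/hospital visit within 2 years
    for event in (p.last_chf_exacerbation, p.last_copd_ed_visit):
        if event is not None and p.ics_stop_date - event <= two_years:
            return True
    return False
```

A stable 70-year-old with no asthma, a single ICS inhaler, and no recent exacerbations would pass this screen; a 39-year-old, or a patient with a CHF exacerbation within the prior 2 years, would be excluded.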
The primary outcome for this study was COPD exacerbation requiring an emergency department visit or hospitalization within 6 months of ICS discontinuation. A secondary outcome examined the rate of COPD exacerbations within 12 months. The original study design called for inferential statistics to compare the rates of the primary and secondary outcomes between patients whose ICS was abruptly discontinued and those who were tapered slowly. After data collection, however, the small sample size and low event rate meant that the planned statistical tests were no longer appropriate. Instead, we analyzed the planned outcomes using descriptive statistics and examined several additional post hoc outcomes to provide deeper insight into clinical practice. We examined the association between relevant patient and treatment factors (age, comorbidity burden, ICS potency, duration of ICS therapy, and EOS count) and the clinician's decision of whether to taper the ICS. The same factors were also evaluated for potential association with an increased risk of COPD exacerbations following ICS discontinuation.
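The descriptive summaries that replaced the planned inferential tests are of the "mean (SD)" form used throughout the Results. A minimal sketch of that formatting, using toy numbers rather than study data:

```python
from statistics import mean, stdev

def mean_sd(values, ndigits=1):
    """Format a 'mean (SD)' summary in the style used in the Results."""
    return f"{mean(values):.{ndigits}f} ({stdev(values):.{ndigits}f})"

# toy illustration only -- not patient data
print(mean_sd([60, 65, 70, 75, 80]))  # 70.0 (7.9)
```

Note that `statistics.stdev` computes the sample standard deviation (n - 1 denominator), which is the convention for reporting a study cohort.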
Results
A total of 75 patients were included. Most patients were White and male, with a mean (SD) age of 71.6 (7.4) years. Charlson Comorbidity Index scores were calculated for all included patients, with a mean (SD) score of 5.4 (2.0). Of note, scores > 5 are considered a severe comorbidity burden and correspond to an estimated mean 10-year survival rate < 21%. Nearly all patients were receiving budesonide/formoterol as their ICS inhaler; 1 patient was receiving mometasone monotherapy. By steroid dose, 18 patients (24%) received a low-dose ICS (200-400 µg of budesonide or 110-220 µg of mometasone), while 57 (76%) received a medium dose (400-800 µg of budesonide or 440 µg of mometasone). No patients received a high-dose ICS. The mean (SD) duration of therapy before discontinuation was 4.0 (2.7) years (Table 1).
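The dose bands above can be written as a small lookup. The thresholds are those stated in the text; treating each band's upper edge as inclusive is our assumption, since the stated ranges overlap at the boundary:

```python
# Upper bounds (ug/day) of the low and medium bands stated in the text.
ICS_DOSE_BANDS = {
    "budesonide": (400, 800),   # low: 200-400, medium: 400-800
    "mometasone": (220, 440),   # low: 110-220, medium: 440
}

def ics_dose_category(drug: str, daily_dose_ug: float) -> str:
    """Map a daily ICS dose to the low/medium/high bands used in Table 1."""
    low_max, medium_max = ICS_DOSE_BANDS[drug]
    if daily_dose_ug <= low_max:
        return "low"
    if daily_dose_ug <= medium_max:
        return "medium"
    return "high"  # no patients in this study fell in the high band

print(ics_dose_category("budesonide", 640))  # medium
```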
Nine patients (12%) had their ICS slowly tapered, while therapy was abruptly discontinued in the other 66 (88%). A variety of taper types were used (Figure), without a strong preference for a particular dosing strategy. The primary outcome of COPD exacerbation requiring an emergency department visit or hospitalization within 6 months occurred in 2 patients. When the time frame was extended to 12 months for the secondary outcome, an additional 3 patients experienced an event. The mean time to event was 172 days following ICS discontinuation. All events occurred in patients whose ICS was discontinued without any type of taper.
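As a quick arithmetic check, the reported counts translate to the following crude event rates in the 75-patient cohort:

```python
n_patients = 75
events_6mo = 2                # primary outcome events within 6 months
events_12mo = events_6mo + 3  # 3 additional events between 6 and 12 months

for label, events in [("6-month", events_6mo), ("12-month", events_12mo)]:
    rate = 100 * events / n_patients
    print(f"{label} crude exacerbation rate: {rate:.1f}%")
# 6-month crude exacerbation rate: 2.7%
# 12-month crude exacerbation rate: 6.7%
```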
In a post hoc analysis, we examined the relationship between specific variables and the clinician's choice of whether to taper an ICS. There was no discernible impact of age, race and ethnicity, comorbidity score, or ICS dose on whether an ICS was tapered. Tapering was slightly less common in patients with a shorter duration of therapy and in those with lower EOS counts. When evaluating the relationship between these same factors and exacerbation occurrence, we saw comparable trends (Table 2): patients with an exacerbation had a slightly longer mean duration of ICS therapy and a lower mean EOS count.
Discussion
Despite facility guidance recommending a taper when discontinuing a moderate- or high-dose ICS, most patients in this study had their ICS discontinued abruptly. Clinicians may have been concerned about patients' ability to adhere to a taper regimen, skeptical of the actual need to taper, or unaware of the VANTHCS recommendation of a specific taper method. Shared decision making with patients may also have played a role in prescribing patterns. Currently, there are insufficient data to support the use of any one particular type of taper over another, which may account for the variability seen in practice.
The decision to taper ICSs did not seem to be strongly associated with any specific demographic factor, although the ability to examine the impact of factors such as race and ethnicity was limited by the largely homogeneous population. One might have expected a taper to be more common in older patients or in those with more comorbidities; however, this was not observed. The only discernible trends were a lower frequency of tapering in patients with a shorter duration of ICS therapy and in those with lower EOS counts. These patients were at lower risk of repeat COPD exacerbations than those with a longer duration of ICS therapy and higher EOS counts, so this finding was unexpected. It suggests that patient-specific factors may not be the primary driver of the ICS tapering decision; instead, the decision may reflect general clinician preferences or shared decision making with individual patients.
Overall, we noted very low rates of COPD exacerbations. As ICS discontinuation was occurring in stable patients without any recent exacerbations, lower rates of future exacerbations were expected compared with the population of patients with COPD as a whole. This suggests that ICS therapy can be safely stopped in stable patients with COPD who are not likely to receive significant benefits as defined in the GOLD guidelines. All of the exacerbations that occurred were in patients whose ICS was abruptly discontinued; however, given the small number of patients who had a taper, it is difficult to draw conclusions. The low overall rate of exacerbations suggests that a taper may not be necessary to ensure safety while stopping a low- or moderate-intensity ICS.
Several randomized controlled trials have attempted to evaluate the need for an ICS taper; however, results remain mixed. The COSMIC study showed a decline in lung function following ICS discontinuation in patients with ≥ 2 COPD exacerbations in the previous year.8 Similar results were seen in the SUNSET study, with increased exacerbation rates after ICS discontinuation in patients with elevated EOS counts.9 However, these studies included patients for whom ICS discontinuation is currently not recommended. Alternatively, the INSTEAD trial examined patients without frequent recent exacerbations and found no difference in lung function, exacerbation rates, or rescue inhaler use between patients who continued combination ICS plus bronchodilator therapy and those de-escalated to bronchodilator monotherapy.10
All 3 studies abruptly stopped the ICS when discontinuing therapy; however, a slow, stepwise taper similar to that used after long periods of oral steroid use may reduce the risk of worsening exacerbations. The WISDOM trial is the only major randomized trial to date that stopped ICS therapy using a stepwise withdrawal.7 Comparing patients who continued triple inhaled therapy (2 bronchodilators plus an ICS) with those de-escalated to dual bronchodilator therapy, de-escalation was noninferior to continuation with respect to time to first COPD exacerbation. Both the WISDOM and INSTEAD trials are consistent with the results of our real-world retrospective evaluation.
There did not seem to be an increased exacerbation risk following ICS discontinuation in any patient subpopulation based on sex, age, race and ethnicity, or comorbidity burden. We noted a trend toward more exacerbations in patients with a longer duration of ICS therapy, suggesting that additional caution may be needed when stopping ICS therapy for these patients. We also noted a trend toward more exacerbations in patients with a lower mean EOS count; however, given the low event rate and wide variability in observed patient EOS counts, this is likely a spurious finding.
Limitations
The small sample size, a result of the strict exclusion criteria, limits the generalizability of the results. Although the low number of events seen in this study supports the safety of ICS discontinuation, higher rates may have been observed in a larger population. The most common reason for exclusion was initiation of another ICS immediately following discontinuation of the original ICS. During the study period, VANTHCS underwent a formulary change: fluticasone/salmeterol replaced budesonide/formoterol as the preferred ICS/LABA combination. As a result, many patients had budesonide/formoterol discontinued during the study period solely to initiate fluticasone/salmeterol therapy. Because these patients did not truly have their ICS discontinued or experience a significant period without ICS therapy, they were not included, and the total patient population available for analysis was relatively limited.
The low event rate also limits the ability to compare various factors influencing exacerbation risk, particularly taper vs abrupt ICS discontinuation. This is further compounded by the small number of patients who had a taper performed and the lack of consistency in the method of tapering used. Statistical significance could not be determined for any outcome, and all findings were purely hypothesis generating. Finally, data were only collected for moderate or severe COPD exacerbations that resulted in an emergency department visit or hospitalization, so there may have been mild exacerbations treated in the outpatient setting that were not captured.
Despite these limitations, this study adds data to an area of COPD management that currently lacks strong clinical guidance. Because investigators had access to clinician notes, ICS tapers could be captured even when patients did not receive a prescription with specific taper instructions. The extended 12-month follow-up also evaluated a longer window for ICS discontinuation to have an effect than most COPD clinical trials do.
Conclusions
Overall, very low rates of COPD exacerbations occurred following ICS discontinuation, regardless of whether a taper
Inhaled corticosteroids (ICSs) are frequently prescribed for the treatment of chronic obstructive pulmonary disease (COPD) to reduce exacerbations in a specific subset of patients. The long-term use of ICSs, however, is associated with several potential systemic adverse effects, including adrenal suppression, decreased bone mineral density, and immunosuppression.1 The concern for immunosuppression is particularly notable and leads to a known increased risk for developing pneumonia in patients with COPD. These patients frequently have other concurrent risk factors for pneumonia (eg, history of tobacco use, older age, and severe airway limitations) and are at higher risk for more severe outcomes in the setting of pneumonia.2,3
Primarily due to the concern of pneumonia risks, the Global Initiative for Chronic Obstructive Lung Disease (GOLD) guidelines have recommended ICS discontinuation in patients who are less likely to receive significant benefits from therapy.4 Likely due to an anti-inflammatory mechanism of action, ICSs have been shown to reduce COPD exacerbation rates in patients with comorbid asthma or who have evidence of a strong inflammatory component to their COPD. The strongest indicator of an inflammatory component is an elevated blood eosinophil (EOS) count; those with EOS > 300 cells/µL are most likely to benefit from ICSs, whereas those with a count < 100 cells/µL are unlikely to have a significant response. In addition to the inflammatory component consideration, prior studies have shown improvements in lung function and reduction of exacerbations with ICS use in patients with frequent moderate-to-severe COPD exacerbations.5 Although the GOLD guidelines provide recommendations about who is appropriate to discontinue ICS use, clinicians have no clear guidance on the risks or the best discontinuation strategy.
Based primarily on data from a prior randomized controlled trial, the Veterans Integrated Services Network (VISN) 17, which includes the Veterans Affairs North Texas Health Care System (VANTHCS) in Dallas, established a recommended ICS de-escalation strategy.6,7 The strategy included a 12-week stepwise taper using a mometasone inhaler for all patients discontinuing a moderate or high dose ICS. The lack of substantial clinical trial data or expert consensus guideline recommendations has left open the question of whether a taper is necessary. To answer that question, this study was conducted to evaluate whether there is a difference in the rate of COPD exacerbations following abrupt discontinuation vs gradual taper of ICS therapy.
Methods
This single-center, retrospective cohort study was conducted at VANTHCS. Patient electronic health records between January 10, 2021, and September 1, 2021, were reviewed for the last documented fill date of any inhaler containing a steroid component. This time frame was chosen to coincide with a VANTHCS initiative to follow GOLD guidelines for ICS discontinuation. Patients were followed for outcomes until November 1, 2022.
To be included in this study, patients had to have active prescriptions at VANTHCS, have a documented diagnosis of COPD in their chart, and be prescribed a stable dose of ICS for ≥ 1 year prior to their latest refill. The inhaler used could contain an ICS as monotherapy, in combination with a long-acting β-agonist (LABA), or as part of triple therapy with an additional long-acting muscarinic antagonist (LAMA). The inhaler needed to be discontinued during the study period of interest.
Patients were excluded if they had a diagnosis of asthma, were aged < 40 years, had active prescriptions for multiple ICS inhalers or nebulizers, or had significant oral steroid use (≥ 5 mg/d prednisone or an equivalent steroid for > 6 weeks) within 1 year of their ICS discontinuation date. In addition, to reduce the risk of future events being misclassified as COPD exacerbations, patients were excluded if they had a congestive heart failure exacerbation up to 2 years before ICS discontinuation or a diagnosis of COVID-19 infection up to 1 year before or 6 months after ICS discontinuation. Patients with a COPD exacerbation requiring an emergency department or hospital visit within 2 years prior to ICS discontinuation were also excluded, as de-escalation of ICS therapy was likely inappropriate in these cases. Finally, patients were excluded if they were started on a different ICS immediately following the discontinuation of their first ICS.
The primary outcome for this study was COPD exacerbations requiring an emergency department visit or hospitalization within 6 months of ICS discontinuation. A secondary outcome examining the rates of COPD exacerbations within 12 months also was used. The original study design called for the use of inferential statistics to compare the rates of primary and secondary outcomes in patients whose ICS was abruptly discontinued with those who were tapered slowly. After data collection, however, the small sample size and low event rate meant that the planned statistical tests were no longer appropriate. Instead, we decided to analyze the planned outcomes using descriptive statistics and look at an additional number of post hoc outcomes to provide deeper insight into clinical practice. We examined the association between relevant demographic factors, such as age, comorbidity burden, ICS potency, duration of ICS therapy, and EOS count and the clinician decision whether to taper the ICS. These same factors were also evaluated for potential association with the increased risk of COPD exacerbations following ICS discontinuation.
Results
A total of 75 patients were included. Most patients were White race and male with a mean (SD) age of 71.6 (7.4) years. Charlson Comorbidity Index scores were calculated for all included patients with a mean (SD) score of 5.4 (2.0). Of note, scores > 5 are considered a severe comorbidity burden and have an estimated mean 10-year survival rate < 21%. The overwhelming majority of patients were receiving budesonide/formoterol as their ICS inhaler with 1 receiving mometasone monotherapy. When evaluating the steroid dose, 18 (24%) patients received a low dose ICS (200-400 µg of budesonide or 110-220 µg of mometasone), while 57 (76%) received a medium dose (400-800 µg of budesonide or 440 µg of mometasone). No patients received a high ICS dose. The mean (SD) duration of therapy before discontinuation was 4.0 (2.7) years (Table 1).
Nine (12%) patients had their ICS slowly tapered, while therapy was abruptly discontinued in the other 66 (88%) patients. A variety of taper types were used (Figure) without a strong preference for a particular dosing strategy. The primary outcome of COPD exacerbation requiring emergency department visit or hospitalization within 6 months occurred in 2 patients. When the time frame was extended to 12 months for the secondary outcome, an additional 3 patients experienced an event. The mean time to event was 172 days following ICS discontinuation. All the events occurred in patients whose ICS was discontinued without any type of taper.
In a post hoc analysis, we examined the relationship between specific variables and the clinician choice whether to taper an ICS. There was no discernable impact of age, race and ethnicity, comorbidity score, or ICS dose on whether an ICS was tapered. We observed a slight association between shorter duration of therapy and lower EOS count and use of a taper. When evaluating the relationship between these same factors and exacerbation occurrence, we saw comparable trends (Table 2). Patients with an exacerbation had a slightly longer mean duration of ICS therapy and lower mean EOS count.
Discussion
Despite facility guidance recommending tapering of therapy when discontinuing a moderate- or high-dose ICS, most patients in this study discontinued the ICS abruptly. The clinician may have been concerned with patients being able to adhere to a taper regimen, skeptical of the actual need to taper, or unaware of the VANTHCS recommendations for a specific taper method. Shared decision making with patients may have also played a role in prescribing patterns. Currently, there is not sufficient data to support the use of any one particular type of taper over another, which accounts for the variability seen in practice.
The decision to taper ICSs did not seem to be strongly associated with any specific demographic factor, although the ability to examine the impact of factors (eg, race and ethnicity) was limited due to the largely homogenous population. One may have expected a taper to be more common in older patients or in those with more comorbidities; however, this was not observed in this study. The only discernible trends seen were a lower frequency of tapering in patients who had a shorter duration of ICS therapy and those with lower EOS counts. These patients were at lower risk of repeat COPD exacerbations compared with those with longer ICS therapy duration and higher EOS counts; therefore, this finding was unexpected. This suggests that patient-specific factors may not be the primary driving force in the ICS tapering decision; instead it may be based on general clinician preferences or shared decision making with individual patients.
Overall, we noted very low rates of COPD exacerbations. As ICS discontinuation was occurring in stable patients without any recent exacerbations, lower rates of future exacerbations were expected compared with the population of patients with COPD as a whole. This suggests that ICS therapy can be safely stopped in stable patients with COPD who are not likely to receive significant benefits as defined in the GOLD guidelines. All of the exacerbations that occurred were in patients whose ICS was abruptly discontinued; however, given the small number of patients who had a taper, it is difficult to draw conclusions. The low overall rate of exacerbations suggests that a taper may not be necessary to ensure safety while stopping a low- or moderate-intensity ICS.
Several randomized controlled trials have attempted to evaluate the need for an ICS taper; however, results remain mixed. The COSMIC study showed a decline in lung function following ICS discontinuation in patients with ≥ 2 COPD exacerbations in the previous year.8 Similar results were seen in the SUNSET study, with increased exacerbation rates after ICS discontinuation in patients with elevated EOS counts.9 However, these studies included patients for whom ICS discontinuation is currently not recommended. Alternatively, the INSTEAD trial enrolled patients without frequent recent exacerbations and found no difference in lung function, exacerbation rates, or rescue inhaler use between patients who continued combination ICS plus bronchodilator therapy and those de-escalated to bronchodilator monotherapy.10
All 3 studies abruptly stopped the ICS when discontinuing therapy; however, a slow, stepwise taper similar to that used after long periods of oral steroid use may reduce the risk of worsening exacerbations. The WISDOM trial is the only major randomized trial to date that stopped ICS therapy via stepwise withdrawal.7 Among patients continued on triple inhaled therapy (2 bronchodilators plus an ICS) vs those de-escalated to dual bronchodilator therapy, de-escalation was noninferior to continuation with respect to time to first COPD exacerbation. The results of both the WISDOM and INSTEAD trials are consistent with those of our real-world retrospective evaluation.
There did not seem to be an increased exacerbation risk following ICS discontinuation in any patient subpopulation based on sex, age, race and ethnicity, or comorbidity burden. We noted a trend toward more exacerbations in patients with a longer duration of ICS therapy, suggesting that additional caution may be needed when stopping ICS therapy for these patients. We also noted a trend toward more exacerbations in patients with a lower mean EOS count; however, given the low event rate and wide variability in observed patient EOS counts, this is likely a spurious finding.
Limitations
The small sample size, resulting from the strict exclusion criteria, limits the generalizability of the results. Although the low number of events seen in this study supports the safety of ICS discontinuation, higher rates might have been observed in a larger population. The most common reason for patient exclusion was the initiation of another ICS immediately following discontinuation of the original ICS. During the study period, VANTHCS underwent a formulary change: fluticasone/salmeterol replaced budesonide/formoterol as the preferred ICS/LABA combination. As a result, many patients had budesonide/formoterol discontinued during the study period solely to initiate fluticasone/salmeterol therapy. Because these patients did not truly have their ICS discontinued or experience a significant period without ICS therapy, they were excluded, leaving a relatively limited patient population available for analysis.
The low event rate also limits the ability to compare various factors influencing exacerbation risk, particularly taper vs abrupt ICS discontinuation. This is further compounded by the small number of patients who had a taper performed and the lack of consistency in the tapering methods used. Statistical significance could not be determined for any outcome, and all findings were purely hypothesis generating. Finally, data were collected only for moderate or severe COPD exacerbations that resulted in an emergency department visit or hospitalization, so mild exacerbations treated in the outpatient setting may not have been captured.
Despite these limitations, this study adds data to an area of COPD management that currently lacks strong clinical guidance. Because investigators had access to clinician notes, we were able to capture ICS tapers even when patients did not receive a prescription with specific taper instructions. The extended 12-month follow-up period evaluated a longer potential window for the impact of ICS discontinuation than is used in most COPD clinical trials.
Conclusions
Overall, very low rates of COPD exacerbations occurred following ICS discontinuation, regardless of whether a taper was used. These findings suggest that ICS therapy can be safely discontinued in stable patients with COPD who are unlikely to derive significant benefit from it, although given the small sample size, these results are hypothesis generating.
1. Yang IA, Clarke MS, Sim EH, Fong KM. Inhaled corticosteroids for stable chronic obstructive pulmonary disease. Cochrane Database Syst Rev. 2012;7(7):CD002991. doi:10.1002/14651858.CD002991.pub3
2. Crim C, Dransfield MT, Bourbeau J, et al. Pneumonia risk with inhaled fluticasone furoate and vilanterol compared with vilanterol alone in patients with COPD. Ann Am Thorac Soc. 2015;12(1):27-34. doi:10.1513/AnnalsATS.201409-413OC
3. Crim C, Calverley PMA, Anderson JA, et al. Pneumonia risk with inhaled fluticasone furoate and vilanterol in COPD patients with moderate airflow limitation: The SUMMIT trial. Respir Med. 2017;131:27-34. doi:10.1016/j.rmed.2017.07.060
4. Global Initiative for Chronic Obstructive Lung Disease. Global strategy for the diagnosis, management, and prevention of chronic obstructive pulmonary disease (2023 Report). Accessed November 3, 2023. https://goldcopd.org/wp-content/uploads/2023/03/GOLD-2023-ver-1.3-17Feb2023_WMV.pdf
5. Nannini LJ, Lasserson TJ, Poole P. Combined corticosteroid and long-acting beta(2)-agonist in one inhaler versus long-acting beta(2)-agonists for chronic obstructive pulmonary disease. Cochrane Database Syst Rev. 2012;9(9):CD006829. doi:10.1002/14651858.CD006826.pub2
6. Kaplan AG. Applying the wisdom of stepping down inhaled corticosteroids in patients with COPD: a proposed algorithm for clinical practice. Int J Chron Obstruct Pulmon Dis. 2015;10:2535-2548. doi:10.2147/COPD.S93321
7. Magnussen H, Disse B, Rodriguez-Roisin R, et al; WISDOM Investigators. Withdrawal of inhaled glucocorticoids and exacerbations of COPD. N Engl J Med. 2014;371(14):1285-1294. doi:10.1056/NEJMoa1407154
8. Wouters EFM, Postma DS, Fokkens B, et al; COSMIC (COPD and Seretide: a Multi-Center Intervention and Characterization) Study Group. Withdrawal of fluticasone propionate from combined salmeterol/fluticasone treatment in patients with COPD causes immediate and sustained disease deterioration: a randomized controlled trial. Thorax. 2005;60(6):480-487. doi:10.1136/thx.2004.034280
9. Chapman KR, Hurst JR, Frent S-M, et al. Long-term triple therapy de-escalation to indacaterol/glycopyrronium in patients with chronic obstructive pulmonary disease (SUNSET): a randomized, double-blind, triple-dummy clinical trial. Am J Respir Crit Care Med. 2018;198(3):329-339. doi:10.1164/rccm.201803-0405OC
10. Rossi A, van der Molen T, del Olmo R, et al. INSTEAD: a randomized switch trial of indacaterol versus salmeterol/fluticasone in moderate COPD. Eur Respir J. 2014;44(6):1548-1556. doi:10.1183/09031936.00126814
Equity and Inclusion in Military Recruitment: The Case for Neurodiversity in Uniform
The willingness with which our young people are likely to serve in any war, no matter how justified, shall be directly proportional to how they perceive how the veterans of earlier wars were treated and appreciated by their nation.
George Washington? 1
This editorial is the second of a 2-part series on the recruitment crisis currently confronting the Army, Navy, and Air Force. Part 1 focused on rationales for the lack of interest or motivation among those potentially eligible to join the military. This column looks at individuals eager to serve who do not meet eligibility requirements. A 2022 article examining the 2020 Qualified Military Available Study found that, without a waiver, 77% of Americans in the prime recruiting age group of 17 to 24 years would be ineligible for the military due to weight, substance use, or mental and physical health conditions. Most young adults met several ineligibility criteria.2
Obesity and substance use are the most common disqualifiers, mirroring the culture at large. Scores of other physical and mental health conditions render an applicant ineligible for military service or require a waiver. The justification for all eligibility criteria is twofold: (1) to ensure that service members can safely and effectively deploy; and (2) to reduce the attrition rate. Both are essential to the mission readiness of the military. In 2022, the military granted an accession waiver to 1 in 6 of those seeking enlistment.3 About 4% of waivers issued were for mental health conditions, such as autism and attention-deficit hyperactivity disorder (ADHD). The response to the recruiting crisis resulted in the largest number of waivers granted in a decade. Military Times noted that exact numbers are hard to obtain, interfering with the transparency of public policy as well as high-quality research on waivers’ impact on recruits and the service.3
The War Horse reported that the current waiver process is riddled with procedural injustice and inequity in implementation.4 Each service sets its own eligibility requirements, the rationale being that the respective branches have distinct roles necessitating distinct qualifications. What is far more difficult to defend is the wide variation in how the criteria are applied. Similar cases are judged differently depending on nonmaterial factors, such as geographic location and the unwritten policies of recruiting offices. Waiver approval rates for mental health conditions range from 35% for the Army to 71% for the Marines. The prospective recruit, not the military service, bears the burden of demonstrating that their condition does not impair their fitness for duty; hence, thousands have been disqualified based on their diagnosis.4 This comes at a time when the US Department of Veterans Affairs (VA) and the US Department of Defense (DoD) have been battling a suicide epidemic for years. Current qualifying standards send a strongly stigmatizing message to those who want to enlist and those already in the ranks at a time when the DoD and VA are launching campaigns to persuade active-duty members and veterans to seek mental health treatment.5
The recruiting crisis brought into stark relief more fundamental questions about the clinical and ethical aspects of eligibility criteria that either disqualify outright or require a waiver process for many young Americans with mental health conditions who want to serve their country. One of the most clinically perplexing standards is that applicants with ADHD are disqualified if they have taken medications in the past 12 to 24 months, depending on the service.6 Despite this policy, the Army acknowledges that stimulant medications may improve the function of individuals with ADHD and reduce the rates of substance use and behavior disturbances, the real concerns for recruiters and commanders.7
Requirements like these place otherwise high-functioning individuals whose professional goal is to serve in the military in a double bind. The military’s own studies show that recruits’ persistent nondisclosure of their diagnoses results in poorer performance and higher attrition rates among those who enlist, even when their conditions are treated.8 If potential recruits disclose their psychiatric history, they may well be disqualified and/or denied a waiver. This is even more true for service members already in the military who may believe they have one of these conditions but fear that being diagnosed will negatively impact their career. Not disclosing their condition prevents service members from obtaining the clinical care and support they need to succeed and also limits the ability of commanders to make deployment decisions that ensure maximal unit performance and the safety of the service member.9 However, ADHD is one of 38 diagnoses for which the DoD is considering removal or modification of the waiver requirement for some subset of applicants.10
The final irony is that medicine and warfare have changed dramatically and rapidly since the initial determination that diagnoses like ADHD and autism disqualify individuals from serving. A RAND Corporation study found that individuals who are neurodivergent (the term applied collectively to individuals with diagnoses like autism and ADHD) may have unique abilities that enable them to outperform neurotypical persons in areas such as pattern recognition, attention to detail, repetitive tasks, and memory, among others. These highly technical skills are essential to intelligence analysis and cybersecurity, domains that are increasingly crucial to both national defense and victory on the battlefield.11 Even congressional representatives who just a few years ago criticized waivers for mental health conditions as “lowering the standards” are now pushing for more moderate policies, especially for those who have received and responded to treatment for their mental health disorders.12
The epigraph has been widely and persistently misattributed to the country’s first commander in chief, George Washington, because it captures a salient sentiment directly bearing on the question of who is fit for duty.1 History has shown that discrimination in enlistment only weakens the fighting force, whereas diversity, including neurodiversity, in the military as in society is a source of strength. Equitable inclusion of those who have the discipline, desire, and dedication to serve their country may be the most positive response to the recruitment crisis.
1. George Washington’s Mount Vernon Washington Library. Accessed November 13, 2023. https://www.mountvernon.org/library/digitalhistory/digital-encyclopedia/article/spurious-quotations/
2. Novelly T. Even more young Americans are unfit to serve, a new study finds. Here’s why. Accessed November 20, 2023. https://www.military.com/daily-news/2022/09/28/new-pentagon-study-shows-77-of-young-americans-are-ineligible-military-service.html
3. Cohen RS. Need for accession waivers soars amid historic recruiting challenges. Accessed November 20, 2023. https://www.militarytimes.com/news/your-air-force/2023/04/10/need-for-accession-waivers-soars-amid-historic-recruiting-challenges
4. Barnhill J. The military is missing recruitment goals. Are thousands being disqualified? The War Horse. Accessed November 20, 2023. https://thewarhorse.org/us-military-recruitment-crisis-may-hinge-on-medical-waivers
5. Hauschild V. Army experts: mixed messages can fuel stigma, prevent soldiers from accessing behavioral healthcare. Accessed November 20, 2023. https://www.army.mil/article/262525/army_experts_mixed_messages_can_fuel_stigma_prevent_soldiers_from_accessing_behavioral_healthcare
6. US Department of Defense. DoD Instructions 6130.03 Volume 1. Section 6, Medical Standards for Military Service: Appointment, Enlistment, or Induction. Updated November 16, 2022. Accessed November 20, 2023. https://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodi/613003_vol1.PDF
7. Sayers D, Hu Z, Clark LL. Attrition rates and incidence of mental health disorders in an attention-deficit hyperactivity disorder (ADHD) cohort, active component, U.S. Armed Forces, 2014-2018. MSMR. 2021;28(1):2-8.
8. Woods J. Serving with ADHD. Accessed November 20, 2023. https://www.armyupress.army.mil/Journals/NCO-Journal/Archives/2022/February/Serving-with-ADHD
9. Thayer RL. Pentagon reviews whether 38 medical conditions should remain as disqualifiers for military service. Accessed November 20, 2023. https://www.stripes.com/theaters/us/2023-03-07/military-medical-waivers-recruitment-9417905.html
10. Weinbaum C. An autistic soldier wants you to read this. Accessed November 20, 2023. https://mwi.usma.edu/an-autistic-soldier-wants-you-to-read-this
11. Weinbaum C, Khan O, Thomas TD, Stein BD. Neurodiversity and national security. Accessed November 20, 2023. https://www.rand.org/pubs/research_reports/RRA1875-1.html
12. Myers M. Senators push DoD to approve recruits who have sought mental health care. Accessed November 20, 2023. https://www.militarytimes.com/news/your-military/2023/03/16/senators-push-dod-to-approve-recruits-whove-sought-mental-health-care
1. George Washington’s Mount Vernon Washington Library. Accessed November 13, 2023. https://www.mountvernon.org/library/digitalhistory/digital-encyclopedia/article/spurious-quotations/
2. Novelly T. Even more young Americans are unfit to serve, a new study finds. Here’s why. Accessed November 20, 2023. https://www.military.com/daily-news/2022/09/28/new-pentagon-study-shows-77-of-young-americans-are-ineligible-military-service.html
3. Cohen RS. Need for accession waivers soars amid historic recruiting challenges. Accessed November 20, 2023. https://www.militarytimes.com/news/your-air-force/2023/04/10/need-for-accession-waivers-soars-amid-historic-recruiting-challenges
4. Barnhill J. The military is missing recruitment goals. Are thousands being disqualified? The War Horse. Accessed November 20, 2023. https://thewarhorse.org/us-military-recruitment-crisis-may-hinge-on-medical-waivers
5. Hauschild V. Army experts: mixed messages can fuel stigma, prevent soldiers from accessing behavioral healthcare. Accessed November 20, 2023. https://www.army.mil/article/262525/army_experts_mixed_messages_can_fuel_stigma_prevent_soldiers_from_accessing_behavioral_healthcare
6. US Department of Defense. DoD Instructions 6130.03 Volume 1. Section 6, Medical Standards for Military Service: Appointment, Enlistment, or Induction. Updated November 16, 2022. Accessed November 20, 2023. https://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodi/613003_vol1.PDF
7. Sayers D, Hu Z, Clark LL. Attrition rates and incidence of mental health disorders in an attention-deficit hyperactivity disorder (ADHD) cohort, active component, U.S. Armed Forces, 2014-2018. MSMR. 2021;28(1):2-8.
8. Woods J. Serving with ADHD. Accessed November 20, 2023. https://www.armyupress.army.mil/Journals/NCO-Journal/Archives/2022/February/Serving-with-ADHD
9. Thayer RL. Pentagon reviews whether 38 medical conditions should remain as disqualifiers for military service. Accessed November 20, 2023. https://www.stripes.com/theaters/us/2023-03-07/military-medical-waivers-recruitment-9417905.html
10. Weinbaum C. An autistic soldier wants you to read this. Accessed November 20, 2023. https://mwi.usma.edu/an-autistic-soldier-wants-you-to-read-this
11. Weinbaum C, Khan O, Thomas TD, Stein BD. Neurodiversity and national security. Accessed November 20, 2023. https://www.rand.org/pubs/research_reports/RRA1875-1.html
12. Myers M. Senators push DoD to approve recruits who have sought mental health care. Accessed November 20, 2023. https://www.militarytimes.com/news/your-military/2023/03/16/senators-push-dod-to-approve-recruits-whove-sought-mental-health-care
Transapical valve replacement relieves mitral regurgitation
Transapical transcatheter mitral valve replacement (TMVR) with the Tendyne device resulted in high procedural success, relief of mitral regurgitation (MR), and improvements in cardiac hemodynamics and quality of life sustained at 1 year.
Further, patients with severe mitral annular calcification (MAC) showed improvements in hemodynamics, functional status, and quality of life after the procedure.
With 70 centers participating in the Tendyne SUMMIT trial, the first 100 roll-in patients were drawn from the first one or two patients treated at each site, none of which had previous Tendyne TMVR experience.
“For this new procedure, with new operators, there was no intraprocedural mortality, and procedural survival was 100%,” co-primary investigator Jason Rogers, MD, of the University of California Davis Medical Center, Sacramento, told attendees at a Late-Breaking Clinical Science session at the Transcatheter Cardiovascular Therapeutics annual meeting.
“The survival was 74% at 12 months. The valve was very effective at eliminating much regurgitation, and 96.5% of patients had either zero or 1+ at a year, and 97% at 30 days had no mitral regurgitation,” he reported. As follow-up was during the COVID-19 pandemic, several of the deaths were attributed to COVID.
Device and trial designs
The Tendyne TMVR is placed through the cardiac apex. It has an outer frame contoured to comport with the shape of the native mitral valve. Inside is a circular, self-expanding, tri-leaflet bioprosthetic valve.
A unique aspect of the design is a tether attached to the outflow side of the valve to allow positioning and control of the valve. At the end of the tether is an apical pad that is placed over the apical access site to control bleeding. The device is currently limited to investigational use in the United States.
The trial enrolled patients with grade III/IV MR or severe MAC if valve anatomy was deemed amenable to transcatheter repair or met MitraClip indications and if these treatments were considered more appropriate than surgery.
Dr. Rogers reported on the first 100 roll-in (early experimental) patients who received Tendyne TMVR. There was a separate severe MAC cohort receiving Tendyne implantation (N = 103). A further 1:1 randomized study of 382 patients compared Tendyne investigational treatment with a MitraClip control group.
At baseline, the 100 roll-in patients had an average age of 75 years, 54% were men, 46% had a frailty score of 2 or greater, and 41% had been hospitalized in the prior 12 months for heart failure. Left ventricular ejection fraction (LVEF) was 48.6% ± 10.3%.
Improved cardiac function
Procedural survival was 100%, technical success 94%, and valve implantation occurred in 97%. Of the first 100 patients, 26 had died by 1 year, and two withdrew consent, leaving 72 for evaluation.
Immediate post-procedure survival was 98%, 87.9% at 3 months, 83.7% at 6 months, and 74.3% at 1 year. MR severity decreased from 29% 3+ and 69% 4+ at baseline to 96.5% 0/1+ and 3.5% 2+ at 1 year.
Cumulative adverse outcomes at 1 year were 27% all-cause mortality, 21.6% cardiovascular mortality, 5.4% all-cause stroke, 2.3% myocardial infarction (MI), 2.2% post-operative mitral reintervention, no major but 2.3% minor device thrombosis, and 32.4% major bleeding.
Most adverse events occurred peri-procedurally or within the first month, representing, “I think, a new procedure with new operators and a high real risk population,” Dr. Rogers said.
Echocardiography at 1 year compared with baseline showed significant changes with decreases in left ventricular end diastolic volume (LVEDV), increases in cardiac output (CO) and forward stroke volume, and no change in mitral valve gradient or left ventricular outflow tract (LVOT) gradient. New York Heart Association (NYHA) classification decreased from 69% class III/IV at baseline to 20% at 1 year, at which point 80% of patients were in class I/II.
“There was a consistent and steady improvement in KCCQ [Kansas City Cardiomyopathy Questionnaire] score, as expected, as patients recovered from this invasive procedure,” Dr. Rogers said. The 1-year score was 68.7, representing fair to good quality of life.
Outcomes with severe MAC
After screening for MR 3+ or greater, severe mitral stenosis, or moderate MR plus mitral stenosis, 103 eligible patients were treated with the Tendyne device. The median MAC volume of the cohort was 4,000 mm³, with a maximum of 38,000 mm³.
Patients averaged 78 years of age, and 44.7% were men; 55.3% had a frailty score of 2 or greater, 73.8% were in NYHA class III or greater, and 29.1% had been hospitalized for heart failure within the prior 12 months. Grade III or IV MR was present in 89%, with MR being primary in 90.3% of patients, and 10.7% had severe mitral stenosis.
Tendyne procedure survival was 98.1%, technical success was 94.2%, and valves were implanted in all patients. Emergency surgery or other intervention was required in 5.8%.
As co-presenter of the SUMMIT results, Vinod Thourani, MD, of the Piedmont Heart Institute in Atlanta, said at 30 days there was 6.8% all-cause mortality, all of it cardiovascular. There was one disabling stroke, one MI, no device thrombosis, and 21.4% major bleeding.
“At 1 month, there was less than grade 1 mitral regurgitation in all patients,” he reported, vs. 89% grade 3+/4+ at baseline. “At 1 month, it was an improvement in the NYHA classification to almost 70% in class I or II, which was improved from baseline of 26% in NYHA class I or II.”
Hemodynamic parameters all showed improvement, with a reduction in LVEF, LVEDV, and mitral valve gradient and increases in CO and forward stroke volume. There was no significant increase in LVOT gradient.
There was a small improvement in the KCCQ quality of life score from a baseline score of 49.2 to 52.3 at 30 days. “We’re expecting the KCCQ overall score to improve on 1 year follow up since the patients [are] still recovering from their thoracotomy incision,” Dr. Thourani predicted.
The primary endpoint will be evaluated at 1 year post procedure, he said at the meeting, sponsored by the Cardiovascular Research Foundation.
No good option
Designated discussant Joanna Chikwe, MD, chair of cardiac surgery at Cedars-Sinai Medical Center in Los Angeles, first thanked the presenters for their trial, saying, “What an absolute pleasure to be a mitral surgeon at a meeting where you’re presenting a solution for something that we find incredibly challenging. There’s no good transcatheter option for MAC. There’s no great surgical option for MAC.”
She noted the small size of the MAC cohort and asked what drove failure in patient screening, which started with 474 patients, identified 120 as eligible, and enrolled 103 in the MAC cohort. The presenters pointed to the neo-LVOT, the residual LVOT created after implantation of the mitral valve prosthesis. Screening also eliminated patients with an annulus that was too large or too small.
Dr. Thourani said in Europe, surgeons have used anterior leaflet splitting before Tendyne, which may help to expand the population of eligible patients, but no leaflet modification was allowed in the SUMMIT trial.
Dr. Chikwe then pointed to the six deaths in the MAC arm and 11 deaths in the roll-in arm and asked about the mechanism of these deaths. “Was it [that] the 22% major bleeding is transapical? Really the Achilles heel of this procedure? Is this something that could become a transcatheter device?”
“We call it a transcatheter procedure, but it’s very much a surgical procedure,” Dr. Rogers answered. “And, you know, despite having great experienced sites...many surgeons don’t deal with the apex very much.” Furthermore, catheter insertion can lead to bleeding complications.
He noted that the roll-in patients were the first one or two cases at each site, and there have been improvements with site experience. The apical pads assist in hemostasis. He said the current design of the Tendyne catheter-delivered valve does not allow it to be adapted to a transfemoral transseptal approach.
Dr. Rogers is a consultant to and co-national principal investigator of the SUMMIT Pivotal Trial for Abbott. He is a consultant to Boston Scientific and a consultant/equity holder in Laminar. Dr. Thourani has received grant/research support from Abbott Vascular, Artivion, AtriCure, Boston Scientific, Croivalve, Edwards Lifesciences, JenaValve, Medtronic, and Trisol; consultant fees/honoraria from Abbott Vascular, Artivion, AtriCure, Boston Scientific, Croivalve, and Edwards Lifesciences; and has an executive role/ownership interest in DASI Simulations. Dr. Chikwe reports no relevant financial relationships. The SUMMIT trial was sponsored by Abbott.
A version of this article first appeared on Medscape.com.
FROM TCT 2023
Delirious mania: Presentation, pathogenesis, and management
Delirious mania is a syndrome characterized by the acute onset of severe hyperactivity, psychosis, catatonia, and intermittent confusion. While there have been growing reports of this phenomenon over the last 2 decades, it remains poorly recognized and understood.1,2 There is no widely accepted nosology for delirious mania and the condition is absent from DSM-5, which magnifies the difficulties in making a timely diagnosis and initiating appropriate treatment. Delayed diagnosis and treatment may result in a detrimental outcome.2,3 Delirious mania has also been labeled as lethal catatonia, specific febrile delirium, hyperactive or exhaustive mania, and Bell’s mania.2,4,5 The characterization and diagnosis of this condition have a long and inconsistent history (Box1,6-11).
Box
Delirious mania was originally recognized in 1849 by Luther Bell in McLean Hospital after he observed 40 cases that were uniquely distinct from 1,700 other cases from 1836 to 1849.6 He described these patients as being suddenly confused, demonstrating unprovoked combativeness, remarkable decreased need for sleep, excessive motor restlessness, extreme fearfulness, and certain physiological signs, including rapid pulse and sweating. Bell was limited to the psychiatric treatment of his time, which largely was confined to physical restraints. Approximately three-fourths of these patients died.6
Following Bell’s report, this syndrome remained unexplored and rarely described. Some researchers postulated that the development of confusion was a natural progression of late-phase mania in close to 20% of patients.7 However, this did not account for the rapid onset of symptoms as well as certain unexplained movement abnormalities. In 1980, Bond8 presented 3 cases that were similar in nature to Bell’s depiction: acute onset with extraordinary irritability, withdrawal, delirium, and mania.
For the next 2 decades, delirious mania was seldom reported in the literature. The term was often reserved to illustrate when a patient had nothing more than mania with features of delirium.9
By 1996, catatonia became better recognized in its wide array of symptomology and diagnostic scales.10,11 In 1999, in addition to the sudden onset of excitement, paranoia, grandiosity, and disorientation, Fink1 reported catatonic signs including negativism, stereotypy, posturing, grimacing, and echo phenomena in patients with delirious mania. He identified its sensitive response to electroconvulsive therapy.
Delirious mania continues to be met with incertitude in clinical practice, and numerous inconsistencies have been reported in the literature. For example, some cases that have been reported as delirious mania had more evidence of primary delirium due to another medical condition or primary mania.12,13 Other cases have demonstrated swift improvement of symptoms after monotherapy with antipsychotics without a trial of benzodiazepines or electroconvulsive therapy (ECT); the exclusion of a sudden onset questions the validity of the diagnosis and promotes the use of less efficacious treatments.14,15 Other reports have confirmed that the diagnosis is missed when certain symptoms are more predominant, such as a thought disorder (acute schizophrenia), grandiosity and delusional ideation (bipolar disorder [BD]), and less commonly assessed catatonic signs (ambitendency, automatic obedience). These symptoms are mistakenly attributed to the respective disease.1,16 This especially holds true when delirious mania is initially diagnosed as a primary psychosis, which leads to the administration of antipsychotics.17 Other cases have reported that delirious mania was resistant to treatment, but ECT was never pursued.18
In this review, we provide a more comprehensive perspective of the clinical presentation, pathogenesis, and management of delirious mania. We searched PubMed and Google Scholar using the keywords “delirious mania,” “delirious mania AND catatonia,” or “manic delirium.” Most articles we found were case reports, case series, or retrospective chart reviews. There were no systematic reviews, meta-analyses, or randomized controlled trials (RCTs). The 12 articles included in this review consist of 7 individual case reports, 4 case series, and 1 retrospective chart review that describe a total of 36 cases (Table1,2,5,17,19-26).
Clinical presentation: What to look for
Patients with delirious mania typically develop symptoms extremely rapidly. In virtually all published literature, symptoms were reported to emerge within hours to days and consisted of severe forms of mania, psychosis, and delirium; 100% of the cases in our review had these symptoms. Commonly reported symptoms were:
- intense excitement
- emotional lability
- grandiose delusions
- profound insomnia
- pressured and rapid speech
- auditory and visual hallucinations
- hypersexuality
- thought disorganization.
Exquisite paranoia can also result in violent aggression (and may require the use of physical restraints). Patients may confine themselves to very small spaces (such as a closet) in response to the intense paranoia. Impairments in various neurocognitive domains—including inability to focus; disorientation; language and visuospatial disturbances; difficulty with shifting and sustaining attention; and short-term memory impairments—have been reported. Patients often cannot recall the events during the episode.1,2,5,27,28
Catatonia has been closely associated with delirious mania.29 Features of excited catatonia—such as excessive motor activity, negativism, grimacing, posturing, echolalia, echopraxia, stereotypy, automatic obedience, verbigeration, combativeness, impulsivity, and rigidity—typically accompany delirious mania.1,5,10,19,27
In addition to these symptoms, patients may engage in specific behaviors. They may exhibit inappropriate toileting such as smearing feces on walls or in bags, fecal or urinary incontinence, disrobing or running naked in public places, or pouring liquid on the floor or on one’s head.1,2
Of the 36 cases reported in the literature we reviewed, 20 (55%) were female. Most patients had an underlying psychiatric condition, including BD (72%), major depressive disorder (8%), and schizophrenia (2%). Three patients had no psychiatric history.
Physical examination
On initial presentation, a patient with delirious mania may be dehydrated, with dry mucous membranes, pale conjunctiva, tongue dryness, and poor skin turgor.28,30 Due to excessive motor activity, diaphoresis with tachycardia, fluctuating blood pressure, and fever may be present.31
Certain basic cognitive tasks should be assessed to determine the patient’s orientation to place, date, and time. Assess if the patient can recall recent events, names of objects, or perform serial 7s; clock drawing capabilities also should be ascertained.1,2,5 A Mini-Mental State Examination is useful.32
The Bush-Francis Catatonia Rating Scale should be used to elicit features of catatonia, such as waxy flexibility, negativism, gegenhalten, mitgehen, catalepsy, ambitendency, automatic obedience, and grasp reflex.10
Laboratory findings are nonspecific
Although no specific laboratory findings are associated with delirious mania, bloodwork and imaging are routinely investigated, especially if delirium characteristics are most striking. A complete blood count, chemistries, hepatic panel, thyroid function tests, blood and/or urine cultures, creatine phosphokinase (CPK), and urinalysis can be ordered. Head imaging such as MRI or CT to rule out intracranial pathology is typically performed.19 However, the diagnosis of delirious mania is based on the presence of the phenotypic features, verification of catatonia, and responsiveness to the treatment delivered.29
Pathogenesis: Several hypotheses
The pathogenesis of delirious mania is not well understood. Several hypotheses have been proposed, but no single theory predominates. Most patients with delirious mania have an underlying systemic medical or psychiatric condition.
Mood disorders. Patients with BD or schizoaffective disorder are especially susceptible to delirious mania. The percentage of manic patients who present with delirious mania varies by study. One study suggested approximately 19% have features of the phenomenon,33 while others estimated 15% to 25%.34 Elias et al35 calculated that 15% of patients with mania succumb to manic exhaustion; these were plausibly cases of misdiagnosed delirious mania.
Delirium hypothesis. Patients with delirious mania typically have features of delirium, including fluctuation of consciousness, disorientation, and/or poor sleep-wake cycle.36 During rapid eye movement (REM) and non-REM sleep, memory circuits are fortified. When there is a substantial loss of REM and non-REM sleep, these circuits become faulty, even after 1 night. Pathological brain waves on EEG reflect the inability to reinforce the memory circuits. Patients with these waves may develop hallucinations, bizarre delusions, and altered sensorium. ECT reduces the pathological slow wave morphologies, thus restoring the synaptic maintenance and correcting the incompetent circuitry. This can explain the robust and rapid response of ECT in a patient with delirious mania.37,38
Neurotransmitter hypothesis. It has been shown that in patients with delirious mania there is dysregulation of dopamine transport, which leads to dopamine overflow in the synapse. In contrast to a drug effect (ie, cocaine or methamphetamine) that acts by inhibiting dopamine reuptake, dopamine overflow in delirious mania is caused by the loss of dopamine transporter regulation. This results in a dysfunctional dopaminergic state that precipitates an acute state of delirium and agitation.39,40
Serotonin plays a role in mood disorders, including mania and depression.41,42 More specifically, serotonin has been implicated in impulsivity and aggression as shown by reduced levels of CSF 5-hydroxyindoleacetic acid (5-HIAA) and depletion of 5-hydroxytryptophan (5-HTP).43
Alterations in gamma-aminobutyric acid (GABA) transmission are known to occur in delirium and catatonia. In delirium, GABA signaling is increased, which disrupts the circadian rhythm and melatonin release, thus impairing the sleep-wake cycle.44 Deficiencies in acetylcholine and melatonin are seen as well as excess of other neurotransmitters, including norepinephrine and glutamate.45 Conversely, in catatonia, functional imaging studies found decreased GABA-A binding in orbitofrontal, prefrontal, parietal, and motor cortical regions.46 A study analyzing 10 catatonic patients found decreased density of GABA-A receptors in the left sensorimotor cortex compared to psychiatric and healthy controls.47
Glutamate signaling at the N-methyl-D-aspartate receptor (NMDAR) has also been hypothesized to be hyperactive, causing downstream dysregulation of GABA functioning.48 However, the exact connection between delirious mania and these receptors and neurotransmitters remains unknown.
Encephalitis hypothesis. The relationship between delirious mania and autoimmune encephalitis suggests delirious mania has etiologies other than a primary psychiatric illness. In a 2020 retrospective study49 that analyzed 79 patients with anti-NMDAR encephalitis, 25.3% met criteria for delirious mania, and 95% of these patients had catatonic features. Dalmau et al50 found that in many cases, patients tend to respond to ECT; in a case series of 3 patients, 2 responded to benzodiazepines.
COVID-19 hypothesis. The SARS-CoV-2 virus has been associated with many neuropsychiatric complications, including mood, psychotic, and neurocognitive disorders.51,52 There also have been cases of COVID-19–induced catatonia.53-55 One case of delirious mania in a patient with COVID-19 has been reported.21 The general mechanism has been proposed to be related to the stimulation of proinflammatory cytokines, such as tumor necrosis factor-alpha and interleukin-6, which the virus induces in large quantities.56 These cytokines have been linked to psychosis and other psychiatric disorders.57 The patient with COVID-19–induced delirious mania had elevated inflammatory markers, including erythrocyte sedimentation rate, C-reactive protein, ferritin, and D-dimer, which supports a proinflammatory state. This patient had a complete resolution of symptoms with ECT.21
Management: Benzodiazepines and ECT
A step-by-step algorithm for managing delirious mania is outlined in the Figure. Regardless of the underlying etiology, management of delirious mania consists of benzodiazepines (lorazepam and diazepam); prompt use of ECT, particularly for patients who do not improve with large doses of lorazepam; or (if applicable) continued treatment of the underlying medical condition, which does not preclude the use of benzodiazepines or ECT. Recent reports27,58 have described details for using ECT for delirious mania, highlighting the use of high-energy dosing, bilateral electrode placement, and frequent sessions.
Knowing which medications to avoid is as important as knowing which agents to administer. Be vigilant in avoiding high-potency antipsychotics, as these medications can worsen extrapyramidal symptoms and may precipitate seizures or neuroleptic malignant syndrome (NMS).28 Anticholinergic agents should also be avoided because they worsen confusion. Although lithium is effective in BD, in patients with delirious mania high doses of lithium combined with haloperidol may cause severe encephalopathic syndromes, with symptoms that can include lethargy, tremors, cerebellar dysfunction, and worsened confusion; this combination may also cause widespread and irreversible brain damage.59
Due to long periods of hyperactivity, withdrawal, and diaphoresis, patients with delirious mania may be severely dehydrated with metabolic derangements, including elevated CPK due to rhabdomyolysis from prolonged exertion, IM antipsychotics, or rigidity. To prevent acute renal failure, this must be immediately addressed with rapid fluid resuscitation and electrolyte repletion.61
Benzodiazepines. Lorazepam should be initiated promptly when delirious mania is suspected. Doses of 6 to 20 mg have been reported to be effective if tolerated.5,20 Typically, high-dose lorazepam will not have the sedative effect that would normally occur in a patient who does not have delirious mania.2 Lorazepam should be titrated until full resolution of symptoms. Doses up to 30 mg have been reported as effective and tolerable.62 In our literature review, 50% of patients (18/36) responded or partially responded to lorazepam. However, only 3 case reports documented a complete remission with lorazepam, and many patients needed ECT for remission of symptoms.
ECT is generally reserved for patients who are not helped by benzodiazepine therapy, estimated to be up to 20% of cases.5 ECT is highly effective in delirious mania, with remission rates ranging from 80% to 100%.1 ECT is also effective in acute nondelirious mania (with efficacy comparable to that in depression); however, it is used in only a small minority of cases (0.2% to 12%).35 In our review, 58% of cases (21/36) reported using ECT, and in all cases it resulted in complete remission.
A dramatic improvement can be seen even after a single ECT session, though most patients show improvement after 4 sessions or 3 to 7 days.1,2,5 In our review, most patients received 4 to 12 sessions until achieving complete remission.
No RCTs have evaluated ECT electrode placement in patients with delirious mania. However, several RCTs have investigated electrode placement in patients with acute nondelirious mania. Hiremani et al63 found that bitemporal placement had a more rapid response rate than bifrontal placement, but there was no overall difference in response rate. Barekatain et al64 found no difference between these 2 bilateral approaches. Many of the delirious mania cases report using a bilateral placement (including 42% of the ECT cases in our review) due to the emergent need for rapid relief of symptoms, which is especially necessary if the patient is experiencing hemodynamic instability, excessive violence, risk for self-harm, worsening delirium, or resistance to lorazepam.
Prognosis: Often fatal if left untreated
Patients with delirious mania are at high risk of progressing to a more severe form of NMS or malignant catatonia. Therefore, high-potency antipsychotics should be avoided; mortality can rise from 60% without antipsychotics to 78% with antipsychotics.4 Some researchers estimate 75% to 78% of cases of delirious mania can be fatal if left untreated.3,6
Bottom Line
Delirious mania is routinely mistaken for more conventional manic or psychotic disorders. Clinicians need to be able to rapidly recognize the symptoms of this syndrome, which include mania, psychosis, delirium, and possible catatonia, so they can avoid administering toxic agents and instead initiate effective treatments such as benzodiazepines and electroconvulsive therapy.
Related Resources
- Arsan C, Baker C, Wong J, et al. Delirious mania: an approach to diagnosis and treatment. Prim Care Companion CNS Disord. 2021;23(1):20f02744. doi:10.4088/PCC.20f02744
- Lamba G, Kennedy EA, Vu CP. Case report: ECT for delirious mania. Clinical Psychiatry News. December 14, 2021. https://www.mdedge.com/psychiatry/article/249909/bipolar-disorder/case-report-ect-delirious-mania
Drug Brand Names
Diazepam • Valium
Haloperidol • Haldol
Lithium • Eskalith, Lithobid
Lorazepam • Ativan
1. Fink M. Delirious mania. Bipolar Disord. 1999;1(1):54-60.
2. Karmacharya R, England ML, Ongür D. Delirious mania: clinical features and treatment response. J Affect Disord. 2008;109(3):312-316.
3. Friedman RS, Mufson MJ, Eisenberg TD, et al. Medically and psychiatrically ill: the challenge of delirious mania. Harv Rev Psychiatry. 2003;11(2):91-98.
4. Mann SC, Caroff SN, Bleier HR, et al. Lethal catatonia. Am J Psychiatry. 1986;143(11):1374-1381.
5. Detweiler MB, Mehra A, Rowell T, et al. Delirious mania and malignant catatonia: a report of 3 cases and review. Psychiatr Q. 2009;80(1):23-40.
6. Bell L. On a form of disease resembling some advanced stages of mania and fever. American Journal of Insanity. 1849;6(2):97-127.
7. Carlson GA, Goodwin FK. The stages of mania. A longitudinal analysis of the manic episode. Arch Gen Psychiatry. 1973;28(2):221-228.
8. Bond TC. Recognition of acute delirious mania. Arch Gen Psychiatry. 1980;37(5):553-554.
9. Hutchinson G, David A. Manic pseudo-delirium - two case reports. Behav Neurol. 1997;10(1):21-23.
10. Bush G, Fink M, Petrides G, et al. Catatonia. I. Rating scale and standardized examination. Acta Psychiatr Scand. 1996;93(2):129-136.
11. Bush G, Fink M, Petrides G, et al. Catatonia. II. Treatment with lorazepam and electroconvulsive therapy. Acta Psychiatr Scand. 1996;93(2):137-143.
12. Cordeiro CR, Saraiva R, Côrte-Real B, et al. When the bell rings: clinical features of Bell’s mania. Prim Care Companion CNS Disord. 2020;22(2):19l02511. doi:10.4088/PCC.19l02511
13. Yeo LX, Kuo TC, Hu KC, et al. Lurasidone-induced delirious mania. Am J Ther. 2019;26(6):e786-e787.
14. Jung WY, Lee BD. Quetiapine treatment for delirious mania in a military soldier. Prim Care Companion J Clin Psychiatry. 2010;12(2):PCC.09l00830. doi:10.4088/PCC.09l00830
15. Wahid N, Chin G, Turner AH, et al. Clinical response of clozapine as a treatment for delirious mania. Ment Illn. 2017;9(2):7182. doi:10.4081/mi.2017.7182
16. Taylor MA, Fink M. Catatonia in psychiatric classification: a home of its own. Am J Psychiatry. 2003;160(7):1233-1241.
17. Danivas V, Behere RV, Varambally S, et al. Electroconvulsive therapy in the treatment of delirious mania: a report of 2 patients. J ECT. 2010;26(4):278-279.
18. O’Callaghan N, McDonald C, Hallahan B. Delirious mania intractable to treatment. Ir J Psychol Med. 2016;33(2):129-132.
19. Vasudev K, Grunze H. What works for delirious catatonic mania? BMJ Case Rep. 2010;2010:bcr0220102713. doi:10.1136/bcr.02.2010.2713
20. Jacobowski NL, Heckers S, Bobo WV. Delirious mania: detection, diagnosis, and clinical management in the acute setting. J Psychiatr Pract. 2013;19(1):15-28.
21. Reinfeld S, Yacoub A. A case of delirious mania induced by COVID-19 treated with electroconvulsive therapy. J ECT. 2021;37(4):e38-e39.
22. Lee BS, Huang SS, Hsu WY, et al. Clinical features of delirious mania: a series of five cases and a brief literature review. BMC Psychiatry. 2012;12:65. doi:10.1186/1471-244X-12-65
23. Bipeta R, Khan MA. Delirious mania: can we get away with this concept? A case report and review of the literature. Case Rep Psychiatry. 2012;2012:720354. doi:10.1155/2012/720354
24. Nunes AL, Cheniaux E. Delirium and mania with catatonic features in a Brazilian patient: response to ECT. J Neuropsychiatry Clin Neurosci. 2014;26(1):E1-E3.
25. Tegin C, Kalayil G, Lippmann S. Electroconvulsive therapy and delirious catatonic mania. J ECT. 2017;33(4):e33-e34.
26. Melo AL, Serra M. Delirious mania and catatonia. Bipolar Disord. 2020;22(6):647-649.
27. Fink M. Expanding the catatonia tent: recognizing electroconvulsive therapy responsive syndromes. J ECT. 2021;37(2):77-79.
28. Fink M. Electroconvulsive Therapy: A Guide for Professionals and Their Patients. Oxford University Press; 2009.
29. Fink M, Taylor MA. The many varieties of catatonia. Eur Arch Psychiatry Clin Neurosci. 2001;251 Suppl 1:I8-I13.
30. Vivanti A, Harvey K, Ash S, et al. Clinical assessment of dehydration in older people admitted to hospital: what are the strongest indicators? Arch Gerontol Geriatr. 2008;47(3):340-355.
31. Ware MR, Feller DB, Hall KL. Neuroleptic malignant syndrome: diagnosis and management. Prim Care Companion CNS Disord. 2018;20(1):17r02185. doi:10.4088/PCC.17r02185
32. Folstein MF, Folstein SE, McHugh PR. “Mini-mental state”. A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975;12(3):189-198.
33. Taylor MA, Abrams R. The phenomenology of mania. A new look at some old patients. Arch Gen Psychiatry. 1973;29(4):520-522.
34. Klerman GL. The spectrum of mania. Compr Psychiatry. 1981;22(1):11-20.
35. Elias A, Thomas N, Sackeim HA. Electroconvulsive therapy in mania: a review of 80 years of clinical experience. Am J Psychiatry. 2021;178(3):229-239.
36. Thom RP, Levy-Carrick NC, Bui M, et al. Delirium. Am J Psychiatry. 2019;176(10):785-793.
37. Charlton BG, Kavanau JL. Delirium and psychotic symptoms--an integrative model. Med Hypotheses. 2002;58(1):24-27.
38. Kramp P, Bolwig TG. Electroconvulsive therapy in acute delirious states. Compr Psychiatry. 1981;22(4):368-371.
39. Mash DC. Excited delirium and sudden death: a syndromal disorder at the extreme end of the neuropsychiatric continuum. Front Physiol. 2016;7:435.
40. Strawn JR, Keck PE Jr, Caroff SN. Neuroleptic malignant syndrome. Am J Psychiatry. 2007;164(6):870-876.
41. Charney DS. Monoamine dysfunction and the pathophysiology and treatment of depression. J Clin Psychiatry. 1998;59 Suppl 14:11-14.
42. Shiah IS, Yatham LN. Serotonin in mania and in the mechanism of action of mood stabilizers: a review of clinical studies. Bipolar Disord. 2000;2(2):77-92.
43. Dalley JW, Roiser JP. Dopamine, serotonin and impulsivity. Neuroscience. 2012;215:42-58.
44. Maldonado JR. Pathoetiological model of delirium: a comprehensive understanding of the neurobiology of delirium and an evidence-based approach to prevention and treatment. Crit Care Clin. 2008;24(4):789-856, ix.
45. Maldonado JR. Neuropathogenesis of delirium: review of current etiologic theories and common pathways. Am J Geriatr Psychiatry. 2013;21(12):1190-1222.
46. Rasmussen SA, Mazurek MF, Rosebush PI. Catatonia: our current understanding of its diagnosis, treatment and pathophysiology. World J Psychiatry. 2016;6(4):391-398.
47. Northoff G, Steinke R, Czcervenka C, et al. Decreased density of GABA-A receptors in the left sensorimotor cortex in akinetic catatonia: investigation of in vivo benzodiazepine receptor binding. J Neurol Neurosurg Psychiatry. 1999;67(4):445-450.
48. Daniels J. Catatonia: clinical aspects and neurobiological correlates. J Neuropsychiatry Clin Neurosci. 2009;21(4):371-380.
49. Restrepo-Martínez M, Chacón-González J, Bayliss L, et al. Delirious mania as a neuropsychiatric presentation in patients with anti-N-methyl-D-aspartate receptor encephalitis. Psychosomatics. 2020;61(1):64-69.
50. Dalmau J, Armangué T, Planagumà J, et al. An update on anti-NMDA receptor encephalitis for neurologists and psychiatrists: mechanisms and models. Lancet Neurol. 2019;18(11):1045-1057.
51. Steardo L Jr, Steardo L, Verkhratsky A. Psychiatric face of COVID-19. Transl Psychiatry. 2020;10(1):261.
52. Iqbal Y, Al Abdulla MA, Albrahim S, et al. Psychiatric presentation of patients with acute SARS-CoV-2 infection: a retrospective review of 50 consecutive patients seen by a consultation-liaison psychiatry team. BJPsych Open. 2020;6(5):e109.
53. Gouse BM, Spears WE, Nieves Archibald A, et al. Catatonia in a hospitalized patient with COVID-19 and proposed immune-mediated mechanism. Brain Behav Immun. 2020;89:529-530.
54. Caan MP, Lim CT, Howard M. A case of catatonia in a man with COVID-19. Psychosomatics. 2020;61(5):556-560.
55. Zain SM, Muthukanagaraj P, Rahman N. Excited catatonia - a delayed neuropsychiatric complication of COVID-19 infection. Cureus. 2021;13(3):e13891.
56. Chowdhury MA, Hossain N, Kashem MA, et al. Immune response in COVID-19: a review. J Infect Public Health. 2020;13(11):1619-1629.
57. Radhakrishnan R, Kaser M, Guloksuz S. The link between the immune system, environment, and psychosis. Schizophr Bull. 2017;43(4):693-697.
58. Fink M, Kellner CH, McCall WV. Optimizing ECT technique in treating catatonia. J ECT. 2016;32(3):149-150.
59. Cohen WJ, Cohen NH. Lithium carbonate, haloperidol, and irreversible brain damage. JAMA. 1974;230(9):1283-1287.
60. Davis MJ, de Nesnera A, Folks DG. Confused and nearly naked after going on spending sprees. Current Psychiatry. 2014;13(7):56-62.
61. Stanley M, Chippa V, Aeddula NR, et al. Rhabdomyolysis. StatPearls Publishing; 2021.
62. Fink M, Taylor MA. The catatonia syndrome: forgotten but not gone. Arch Gen Psychiatry. 2009;66(11):1173-1177.
63. Hiremani RM, Thirthalli J, Tharayil BS, et al. Double-blind randomized controlled study comparing short-term efficacy of bifrontal and bitemporal electroconvulsive therapy in acute mania. Bipolar Disord. 2008;10(6):701-707.
64. Barekatain M, Jahangard L, Haghighi M, et al. Bifrontal versus bitemporal electroconvulsive therapy in severe manic patients. J ECT. 2008;24(3):199-202.
Mood disorders. Patients with BD or schizoaffective disorder are especially susceptible to delirious mania. The percentage of manic patients who present with delirious mania varies by study. One study suggested approximately 19% have features of the phenomenon,33 while others estimated 15% to 25%.34 Elias et al35 calculated that 15% of patients with mania succumb to manic exhaustion; from this it can be reasonably concluded that these were cases of misdiagnosed delirious mania.
Delirium hypothesis. Patients with delirious mania typically have features of delirium, including fluctuation of consciousness, disorientation, and/or poor sleep-wake cycle.36 During rapid eye movement (REM) and non-REM sleep, memory circuits are fortified. When there is a substantial loss of REM and non-REM sleep, these circuits become faulty, even after 1 night. Pathological brain waves on EEG reflect the inability to reinforce the memory circuits. Patients with these waves may develop hallucinations, bizarre delusions, and altered sensorium. ECT reduces the pathological slow wave morphologies, thus restoring the synaptic maintenance and correcting the incompetent circuitry. This can explain the robust and rapid response of ECT in a patient with delirious mania.37,38
Neurotransmitter hypothesis. It has been shown that in patients with delirious mania there is dysregulation of dopamine transport, which leads to dopamine overflow in the synapse. In contrast to a drug effect (ie, cocaine or methamphetamine) that acts by inhibiting dopamine reuptake, dopamine overflow in delirious mania is caused by the loss of dopamine transporter regulation. This results in a dysfunctional dopaminergic state that precipitates an acute state of delirium and agitation.39,40
Serotonin plays a role in mood disorders, including mania and depression.41,42 More specifically, serotonin has been implicated in impulsivity and aggression as shown by reduced levels of CSF 5-hydroxyindoleacetic acid (5-HIAA) and depletion of 5-hydroxytryptophan (5-HTP).43
Continue to: Alterations in gamma-aminobutyric acid (GABA) transmission...
Alterations in gamma-aminobutyric acid (GABA) transmission are known to occur in delirium and catatonia. In delirium, GABA signaling is increased, which disrupts the circadian rhythm and melatonin release, thus impairing the sleep-wake cycle.44 Deficiencies in acetylcholine and melatonin are seen as well as excess of other neurotransmitters, including norepinephrine and glutamate.45 Conversely, in catatonia, functional imaging studies found decreased GABA-A binding in orbitofrontal, prefrontal, parietal, and motor cortical regions.46 A study analyzing 10 catatonic patients found decreased density of GABA-A receptors in the left sensorimotor cortex compared to psychiatric and healthy controls.47
Other neurotransmitters, such as glutamate, at the N-methyl-D-aspartate receptors (NMDAR) have been hypothesized to be hyperactive, causing downstream dysregulation of GABA functioning.48 However, the exact connection between delirious mania and all these receptors and neurotransmitters remains unknown.
Encephalitis hypothesis. The relationship between delirious mania and autoimmune encephalitis suggests delirious mania has etiologies other than a primary psychiatric illness. In a 2020 retrospective study49 that analyzed 79 patients with anti-NMDAR encephalitis, 25.3% met criteria for delirious mania, and 95% of these patients had catatonic features. Dalmau et al50 found that in many cases, patients tend to respond to ECT; in a cases series of 3 patients, 2 responded to benzodiazepines.
COVID-19 hypothesis. The SARS-CoV-2 virion has been associated with many neuropsychiatric complications, including mood, psychotic, and neurocognitive disorders.51,52 There also have been cases of COVID-19–induced catatonia.53-55 One case of delirious mania in a patient with COVID-19 has been reported.21 The general mechanism has been proposed to be related to the stimulation of the proinflammatory cytokines, such as tumor necrosis factor-alpha and interleukin-6, which the virus produces in large quantities.56 These cytokines have been linked to psychosis and other psychiatric disorders.57 The patient with COVID-19–induced delirious mania had elevated inflammatory markers, including erythrocyte sedimentation rate, C-reactive protein, ferritin, and D-dimer, which supports a proinflammatory state. This patient had a complete resolution of symptoms with ECT.21
Management: Benzodiazepines and ECT
A step-by-step algorithm for managing delirious mania is outlined in the Figure. Regardless of the underlining etiology, management of delirious mania consists of benzodiazepines (lorazepam and diazepam); prompt use of ECT, particularly for patients who do not improve with large doses of lorazepam; or (if applicable) continued treatment of the underlining medical condition, which does not preclude the use of benzodiazepines or ECT. Recent reports27,58 have described details for using ECT for delirious mania, highlighting the use of high-energy dosing, bilateral electrode placement, and frequent sessions.
Continue to: Knowing which medications...
Knowing which medications to avoid is as important as knowing which agents to administer. Be vigilant in avoiding high-potency antipsychotics, as these medications can worsen extrapyramidal symptoms and may precipitate seizures or neuroleptic malignant syndrome (NMS).28 Anticholinergic agents should also be avoided because they worsen confusion. Although lithium is effective in BD, in delirious mania, high doses of lithium and haloperidol may cause severe encephalopathic syndromes, with symptoms that can include lethargy, tremors, cerebellar dysfunction, and worsened confusion; it may also cause widespread and irreversible brain damage.59
Due to long periods of hyperactivity, withdrawal, and diaphoresis, patients with delirious mania may be severely dehydrated with metabolic derangements, including elevated CPK due to rhabdomyolysis from prolonged exertion, IM antipsychotics, or rigidity. To prevent acute renal failure, this must be immediately addressed with rapid fluid resuscitation and electrolyte repletion.61
Benzodiazepines. The rapid use of lorazepam should be initiated when delirious mania is suspected. Doses of 6 to 20 mg have been reported to be effective if tolerated.5,20 Typically, high-dose lorazepam will not have the sedative effect that would normally occur in a patient who does not have delirious mania.2 Lorazepam should be titrated until full resolution of symptoms. Doses up to 30 mg have been reported as effective and tolerable.62 In our literature review, 50% of patients (18/36) responded or partially responded to lorazepam. However, only 3 case reports documented a complete remission with lorazepam, and many patients needed ECT for remission of symptoms.
ECT is generally reserved for patients who are not helped by benzodiazepine therapy, which is estimated to be up to 20%.5 ECT is highly effective in delirious mania, with remission rates ranging from 80% to 100%.1 ECT is also effective in acute nondelirious mania (comparable to depression); however, it is only used in a small minority of cases (0.2% to 12%).35 In our review, 58% of cases (21/36) reported using ECT, and in all cases it resulted in complete remission.
A dramatic improvement can be seen even after a single ECT session, though most patients show improvement after 4 sessions or 3 to 7 days.1,2,5 In our review, most patients received 4 to 12 sessions until achieving complete remission.
Continue to: No RCTs have evaluated...
No RCTs have evaluated ECT electrode placement in patients with delirious mania. However, several RCTs have investigated electrode placement in patients with acute nondelirious mania. Hiremani et al63 found that bitemporal placement had a more rapid response rate than bifrontal placement, but there was no overall difference in response rate. Barekatain et al64 found no difference between these 2 bilateral approaches. Many of the delirious mania cases report using a bilateral placement (including 42% of the ECT cases in our review) due to the emergent need for rapid relief of symptoms, which is especially necessary if the patient is experiencing hemodynamic instability, excessive violence, risk for self-harm, worsening delirium, or resistance to lorazepam.
Prognosis: Often fatal if left untreated
Patients with delirious mania are at high risk to progress to a more severe form of NMS or malignant catatonia. Therefore, high-potency antipsychotics should be avoided; mortality can be elevated from 60% without antipsychotics to 78% with antipsychotics.4 Some researchers estimate 75% to 78% of cases of delirious mania can be fatal if left untreated.3,6
Bottom Line
Delirious mania is routinely mistaken for more conventional manic or psychotic disorders. Clinicians need to be able to rapidly recognize the symptoms of this syndrome, which include mania, psychosis, delirium, and possible catatonia, so they can avoid administering toxic agents and instead initiate effective treatments such as benzodiazepines and electroconvulsive therapy.
Related Resources
- Arsan C, Baker C, Wong J, et al. Delirious mania: an approach to diagnosis and treatment. Prim Care Companion CNS Disord. 2021;23(1):20f02744. doi:10.4088/PCC.20f02744
- Lamba G, Kennedy EA, Vu CP. Case report: ECT for delirious mania. Clinical Psychiatry News. December 14, 2021. https://www.mdedge.com/psychiatry/article/249909/bipolar-disorder/case-report-ect-delirious-mania
Drug Brand Names
Diazepam • Valium
Haloperidol • Haldol
Lithium • Eskalith, Lithobid
Lorazepam • Ativan
Delirious mania is a syndrome characterized by the acute onset of severe hyperactivity, psychosis, catatonia, and intermittent confusion. While there have been growing reports of this phenomenon over the last 2 decades, it remains poorly recognized and understood.1,2 There is no widely accepted nosology for delirious mania and the condition is absent from DSM-5, which magnifies the difficulties in making a timely diagnosis and initiating appropriate treatment. Delayed diagnosis and treatment may result in a detrimental outcome.2,3 Delirious mania has also been labeled as lethal catatonia, specific febrile delirium, hyperactive or exhaustive mania, and Bell’s mania.2,4,5 The characterization and diagnosis of this condition have a long and inconsistent history (Box1,6-11).
Box
Delirious mania was originally recognized in 1849 by Luther Bell at McLean Hospital after he observed 40 cases that were uniquely distinct from 1,700 other cases from 1836 to 1849.6 He described these patients as being suddenly confused and demonstrating unprovoked combativeness, a remarkably decreased need for sleep, excessive motor restlessness, extreme fearfulness, and certain physiological signs, including rapid pulse and sweating. Bell was limited to the psychiatric treatment of his time, which largely was confined to physical restraints. Approximately three-fourths of these patients died.6
Following Bell’s report, this syndrome remained unexplored and rarely described. Some researchers postulated that the development of confusion was a natural progression of late-phase mania in close to 20% of patients.7 However, this did not account for the rapid onset of symptoms as well as certain unexplained movement abnormalities. In 1980, Bond8 presented 3 cases that were similar in nature to Bell’s depiction: acute onset with extraordinary irritability, withdrawal, delirium, and mania.
For the next 2 decades, delirious mania was seldom reported in the literature. The term was often reserved to illustrate when a patient had nothing more than mania with features of delirium.9
By 1996, catatonia became better recognized in its wide array of symptomology and diagnostic scales.10,11 In 1999, in addition to the sudden onset of excitement, paranoia, grandiosity, and disorientation, Fink1 reported catatonic signs including negativism, stereotypy, posturing, grimacing, and echo phenomena in patients with delirious mania. He identified its sensitive response to electroconvulsive therapy.
Delirious mania continues to be met with incertitude in clinical practice, and numerous inconsistencies have been reported in the literature. For example, some cases that have been reported as delirious mania had more evidence of primary delirium due to another medical condition or primary mania.12,13 Other cases have demonstrated swift improvement of symptoms after monotherapy with antipsychotics without a trial of benzodiazepines or electroconvulsive therapy (ECT); the exclusion of a sudden onset questions the validity of the diagnosis and promotes the use of less efficacious treatments.14,15 Other reports have confirmed that the diagnosis is missed when certain symptoms are more predominant, such as a thought disorder (acute schizophrenia), grandiosity and delusional ideation (bipolar disorder [BD]), and less commonly assessed catatonic signs (ambitendency, automatic obedience). These symptoms are mistakenly attributed to the respective disease.1,16 This especially holds true when delirious mania is initially diagnosed as a primary psychosis, which leads to the administration of antipsychotics.17 Other cases have reported that delirious mania was resistant to treatment, but ECT was never pursued.18
In this review, we provide a more comprehensive perspective of the clinical presentation, pathogenesis, and management of delirious mania. We searched PubMed and Google Scholar using the keywords “delirious mania,” “delirious mania AND catatonia,” or “manic delirium.” Most articles we found were case reports, case series, or retrospective chart reviews. There were no systematic reviews, meta-analyses, or randomized controlled trials (RCTs). The 12 articles included in this review consist of 7 individual case reports, 4 case series, and 1 retrospective chart review that describe a total of 36 cases (Table1,2,5,17,19-26).
Clinical presentation: What to look for
Patients with delirious mania typically develop symptoms extremely rapidly. In virtually all published literature, symptoms were reported to emerge within hours to days and consisted of severe forms of mania, psychosis, and delirium; 100% of the cases in our review had these symptoms. Commonly reported symptoms were:
- intense excitement
- emotional lability
- grandiose delusions
- profound insomnia
- pressured and rapid speech
- auditory and visual hallucinations
- hypersexuality
- thought disorganization.
Exquisite paranoia can also result in violent aggression (and may require the use of physical restraints). Patients may confine themselves to very small spaces (such as a closet) in response to the intense paranoia. Impairments in various neurocognitive domains—including inability to focus; disorientation; language and visuospatial disturbances; difficulty with shifting and sustaining attention; and short-term memory impairments—have been reported. Patients often cannot recall the events during the episode.1,2,5,27,28
Catatonia has been closely associated with delirious mania.29 Features of excited catatonia—such as excessive motor activity, negativism, grimacing, posturing, echolalia, echopraxia, stereotypy, automatic obedience, verbigeration, combativeness, impulsivity, and rigidity—typically accompany delirious mania.1,5,10,19,27
In addition to these symptoms, patients may engage in specific behaviors. They may exhibit inappropriate toileting such as smearing feces on walls or in bags, fecal or urinary incontinence, disrobing or running naked in public places, or pouring liquid on the floor or on one’s head.1,2
Of the 36 cases reported in the literature we reviewed, 20 (55%) were female. Most patients had an underlying psychiatric condition, including BD (72%), major depressive disorder (8%), and schizophrenia (2%). Three patients had no psychiatric history.
Physical examination
On initial presentation, a patient with delirious mania may be dehydrated, with dry mucous membranes, pale conjunctiva, tongue dryness, and poor skin turgor.28,30 Due to excessive motor activity, diaphoresis with tachycardia, fluctuating blood pressure, and fever may be present.31
Certain basic cognitive tasks should be assessed to determine the patient’s orientation to place, date, and time. Assess if the patient can recall recent events, names of objects, or perform serial 7s; clock drawing capabilities also should be ascertained.1,2,5 A Mini-Mental State Examination is useful.32
The Bush-Francis Catatonia Rating Scale should be used to elicit features of catatonia, such as waxy flexibility, negativism, gegenhalten, mitgehen, catalepsy, ambitendency, automatic obedience, and grasp reflex.10
Laboratory findings are nonspecific
Although no specific laboratory findings are associated with delirious mania, bloodwork and imaging are routinely investigated, especially if delirium characteristics are most striking. A complete blood count, chemistries, hepatic panel, thyroid function tests, blood and/or urine cultures, creatine phosphokinase (CPK), and urinalysis can be ordered. Head imaging such as MRI and CT to rule out intracranial pathology is typically performed.19 However, the diagnosis of delirious mania is based on the presence of phenotypic features, verification of catatonia, and responsiveness to the treatment delivered.29
Pathogenesis: Several hypotheses
The pathogenesis of delirious mania is not well understood. Several hypotheses have been proposed, but none is definitive. Most patients with delirious mania have an underlying systemic medical or psychiatric condition.
Mood disorders. Patients with BD or schizoaffective disorder are especially susceptible to delirious mania. The percentage of manic patients who present with delirious mania varies by study. One study suggested approximately 19% have features of the phenomenon,33 while others estimated 15% to 25%.34 Elias et al35 calculated that 15% of patients with mania succumb to manic exhaustion; these may plausibly have been misdiagnosed cases of delirious mania.
Delirium hypothesis. Patients with delirious mania typically have features of delirium, including fluctuation of consciousness, disorientation, and/or poor sleep-wake cycle.36 During rapid eye movement (REM) and non-REM sleep, memory circuits are fortified. When there is a substantial loss of REM and non-REM sleep, these circuits become faulty, even after 1 night. Pathological brain waves on EEG reflect the inability to reinforce the memory circuits. Patients with these waves may develop hallucinations, bizarre delusions, and altered sensorium. ECT reduces the pathological slow wave morphologies, thus restoring the synaptic maintenance and correcting the incompetent circuitry. This can explain the robust and rapid response of ECT in a patient with delirious mania.37,38
Neurotransmitter hypothesis. It has been shown that in patients with delirious mania there is dysregulation of dopamine transport, which leads to dopamine overflow in the synapse. In contrast to a drug effect (eg, cocaine or methamphetamine) that acts by inhibiting dopamine reuptake, dopamine overflow in delirious mania is caused by the loss of dopamine transporter regulation. This results in a dysfunctional dopaminergic state that precipitates an acute state of delirium and agitation.39,40
Serotonin plays a role in mood disorders, including mania and depression.41,42 More specifically, serotonin has been implicated in impulsivity and aggression as shown by reduced levels of CSF 5-hydroxyindoleacetic acid (5-HIAA) and depletion of 5-hydroxytryptophan (5-HTP).43
Alterations in gamma-aminobutyric acid (GABA) transmission are known to occur in delirium and catatonia. In delirium, GABA signaling is increased, which disrupts the circadian rhythm and melatonin release, thus impairing the sleep-wake cycle.44 Deficiencies in acetylcholine and melatonin are seen as well as excess of other neurotransmitters, including norepinephrine and glutamate.45 Conversely, in catatonia, functional imaging studies found decreased GABA-A binding in orbitofrontal, prefrontal, parietal, and motor cortical regions.46 A study analyzing 10 catatonic patients found decreased density of GABA-A receptors in the left sensorimotor cortex compared to psychiatric and healthy controls.47
Glutamate signaling at the N-methyl-D-aspartate receptor (NMDAR) has also been hypothesized to be hyperactive, causing downstream dysregulation of GABA functioning.48 However, the exact connection between delirious mania and all these receptors and neurotransmitters remains unknown.
Encephalitis hypothesis. The relationship between delirious mania and autoimmune encephalitis suggests delirious mania has etiologies other than a primary psychiatric illness. In a 2020 retrospective study49 that analyzed 79 patients with anti-NMDAR encephalitis, 25.3% met criteria for delirious mania, and 95% of these patients had catatonic features. Dalmau et al50 found that in many cases, patients tend to respond to ECT; in a case series of 3 patients, 2 responded to benzodiazepines.
COVID-19 hypothesis. The SARS-CoV-2 virion has been associated with many neuropsychiatric complications, including mood, psychotic, and neurocognitive disorders.51,52 There also have been cases of COVID-19–induced catatonia.53-55 One case of delirious mania in a patient with COVID-19 has been reported.21 The general mechanism has been proposed to be related to the stimulation of the proinflammatory cytokines, such as tumor necrosis factor-alpha and interleukin-6, which the virus produces in large quantities.56 These cytokines have been linked to psychosis and other psychiatric disorders.57 The patient with COVID-19–induced delirious mania had elevated inflammatory markers, including erythrocyte sedimentation rate, C-reactive protein, ferritin, and D-dimer, which supports a proinflammatory state. This patient had a complete resolution of symptoms with ECT.21
Management: Benzodiazepines and ECT
A step-by-step algorithm for managing delirious mania is outlined in the Figure. Regardless of the underlying etiology, management of delirious mania consists of benzodiazepines (lorazepam and diazepam); prompt use of ECT, particularly for patients who do not improve with large doses of lorazepam; or (if applicable) continued treatment of the underlying medical condition, which does not preclude the use of benzodiazepines or ECT. Recent reports27,58 have described details for using ECT for delirious mania, highlighting the use of high-energy dosing, bilateral electrode placement, and frequent sessions.
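For readers who think in flowcharts, the management sequence described above can be sketched schematically. This is an illustrative sketch only, not clinical software or the Figure itself; the function name and step wording are our own:

```python
# Illustrative schematic (not clinical software) of the management sequence
# described in the text for suspected delirious mania.

def manage_delirious_mania(responds_to_lorazepam: bool,
                           underlying_medical_condition: bool) -> list[str]:
    """Return the ordered management steps described in this review."""
    steps = [
        "screen for catatonia (Bush-Francis Catatonia Rating Scale)",
        "start lorazepam and titrate until full resolution of symptoms",
    ]
    if underlying_medical_condition:
        # Treating the underlying condition does not preclude benzodiazepines or ECT.
        steps.append("continue treatment of the underlying medical condition")
    if not responds_to_lorazepam:
        # Prompt ECT for patients who do not improve with large doses of lorazepam.
        steps.append("proceed promptly to ECT (high-energy dosing, "
                     "bilateral electrode placement, frequent sessions)")
    # Medications to avoid, per the text below.
    steps.append("avoid high-potency antipsychotics and anticholinergic agents")
    return steps
```

The branch structure mirrors the text: benzodiazepines first, ECT reserved for lorazepam nonresponders, and concurrent medical treatment handled in parallel rather than as an alternative.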
Knowing which medications to avoid is as important as knowing which agents to administer. Be vigilant in avoiding high-potency antipsychotics, as these medications can worsen extrapyramidal symptoms and may precipitate seizures or neuroleptic malignant syndrome (NMS).28 Anticholinergic agents should also be avoided because they worsen confusion. Although lithium is effective in BD, in delirious mania the combination of high doses of lithium and haloperidol may cause severe encephalopathic syndromes, with symptoms that can include lethargy, tremors, cerebellar dysfunction, and worsened confusion; it may also cause widespread and irreversible brain damage.59
Due to long periods of hyperactivity, withdrawal, and diaphoresis, patients with delirious mania may be severely dehydrated with metabolic derangements, including elevated CPK due to rhabdomyolysis from prolonged exertion, IM antipsychotics, or rigidity. To prevent acute renal failure, this must be immediately addressed with rapid fluid resuscitation and electrolyte repletion.61
Benzodiazepines. Lorazepam should be started promptly when delirious mania is suspected. Doses of 6 to 20 mg have been reported to be effective if tolerated.5,20 Typically, high-dose lorazepam will not have the sedative effect that would normally occur in a patient who does not have delirious mania.2 Lorazepam should be titrated until full resolution of symptoms. Doses up to 30 mg have been reported as effective and tolerable.62 In our literature review, 50% of patients (18/36) responded or partially responded to lorazepam. However, only 3 case reports documented a complete remission with lorazepam, and many patients needed ECT for remission of symptoms.
ECT is generally reserved for patients who are not helped by benzodiazepine therapy, an estimated up to 20% of patients.5 ECT is highly effective in delirious mania, with remission rates ranging from 80% to 100%.1 ECT is also effective in acute nondelirious mania (comparable to depression); however, it is only used in a small minority of cases (0.2% to 12%).35 In our review, 58% of cases (21/36) reported using ECT, and in all cases it resulted in complete remission.
A dramatic improvement can be seen even after a single ECT session, though most patients show improvement after 4 sessions or 3 to 7 days.1,2,5 In our review, most patients received 4 to 12 sessions until achieving complete remission.
No RCTs have evaluated ECT electrode placement in patients with delirious mania. However, several RCTs have investigated electrode placement in patients with acute nondelirious mania. Hiremani et al63 found that bitemporal placement produced a more rapid response than bifrontal placement, but no overall difference in response rate. Barekatain et al64 found no difference between these 2 bilateral approaches. Many of the delirious mania cases report using a bilateral placement (including 42% of the ECT cases in our review) due to the emergent need for rapid relief of symptoms, which is especially necessary if the patient is experiencing hemodynamic instability, excessive violence, risk for self-harm, worsening delirium, or resistance to lorazepam.
Prognosis: Often fatal if left untreated
Patients with delirious mania are at high risk of progressing to a more severe form of NMS or malignant catatonia. Therefore, high-potency antipsychotics should be avoided; mortality may increase from 60% without antipsychotics to 78% with antipsychotics.4 Some researchers estimate 75% to 78% of cases of delirious mania can be fatal if left untreated.3,6
Bottom Line
Delirious mania is routinely mistaken for more conventional manic or psychotic disorders. Clinicians need to be able to rapidly recognize the symptoms of this syndrome, which include mania, psychosis, delirium, and possible catatonia, so they can avoid administering toxic agents and instead initiate effective treatments such as benzodiazepines and electroconvulsive therapy.
Related Resources
- Arsan C, Baker C, Wong J, et al. Delirious mania: an approach to diagnosis and treatment. Prim Care Companion CNS Disord. 2021;23(1):20f02744. doi:10.4088/PCC.20f02744
- Lamba G, Kennedy EA, Vu CP. Case report: ECT for delirious mania. Clinical Psychiatry News. December 14, 2021. https://www.mdedge.com/psychiatry/article/249909/bipolar-disorder/case-report-ect-delirious-mania
Drug Brand Names
Diazepam • Valium
Haloperidol • Haldol
Lithium • Eskalith, Lithobid
Lorazepam • Ativan
1. Fink M. Delirious mania. Bipolar Disord. 1999;1(1):54-60.
2. Karmacharya R, England ML, Ongür D. Delirious mania: clinical features and treatment response. J Affect Disord. 2008;109(3):312-316.
3. Friedman RS, Mufson MJ, Eisenberg TD, et al. Medically and psychiatrically ill: the challenge of delirious mania. Harv Rev Psychiatry. 2003;11(2):91-98.
4. Mann SC, Caroff SN, Bleier HR, et al. Lethal catatonia. Am J Psychiatry. 1986;143(11):1374-1381.
1. Fink M. Delirious mania. Bipolar Disord. 1999;1(1):54-60.
2. Karmacharya R, England ML, Ongür D. Delirious mania: clinical features and treatment response. J Affect Disord. 2008;109(3):312-316.
3. Friedman RS, Mufson MJ, Eisenberg TD, et al. Medically and psychiatrically ill: the challenge of delirious mania. Harv Rev Psychiatry. 2003;11(2):91-98.
4. Mann SC, Caroff SN, Bleier HR, et al. Lethal catatonia. Am J Psychiatry. 1986;143(11):1374-1381.
5. Detweiler MB, Mehra A, Rowell T, et al. Delirious mania and malignant catatonia: a report of 3 cases and review. Psychiatr Q. 2009;80(1):23-40.
6. Bell L. On a form of disease resembling some advanced stages of mania and fever. American Journal of Insanity. 1849;6(2):97-127.
7. Carlson GA, Goodwin FK. The stages of mania. A longitudinal analysis of the manic episode. Arch Gen Psychiatry. 1973;28(2):221-228.
8. Bond TC. Recognition of acute delirious mania. Arch Gen Psychiatry. 1980;37(5):553-554.
9. Hutchinson G, David A. Manic pseudo-delirium - two case reports. Behav Neurol. 1997;10(1):21-23.
10. Bush G, Fink M, Petrides G, et al. Catatonia. I. Rating scale and standardized examination. Acta Psychiatr Scand. 1996;93(2):129-136.
11. Bush G, Fink M, Petrides G, et al. Catatonia. II. Treatment with lorazepam and electroconvulsive therapy. Acta Psychiatr Scand. 1996;93(2):137-143.
12. Cordeiro CR, Saraiva R, Côrte-Real B, et al. When the bell rings: clinical features of Bell’s mania. Prim Care Companion CNS Disord. 2020;22(2):19l02511. doi:10.4088/PCC.19l02511
13. Yeo LX, Kuo TC, Hu KC, et al. Lurasidone-induced delirious mania. Am J Ther. 2019;26(6):e786-e787.
14. Jung WY, Lee BD. Quetiapine treatment for delirious mania in a military soldier. Prim Care Companion J Clin Psychiatry. 2010;12(2):PCC.09l00830. doi:10.4088/PCC.09l00830
15. Wahid N, Chin G, Turner AH, et al. Clinical response of clozapine as a treatment for delirious mania. Ment Illn. 2017;9(2):7182. doi:10.4081/mi.2017.7182
16. Taylor MA, Fink M. Catatonia in psychiatric classification: a home of its own. Am J Psychiatry. 2003;160(7):1233-1241.
17. Danivas V, Behere RV, Varambally S, et al. Electroconvulsive therapy in the treatment of delirious mania: a report of 2 patients. J ECT. 2010;26(4):278-279.
18. O’Callaghan N, McDonald C, Hallahan B. Delirious mania intractable to treatment. Ir J Psychol Med. 2016;33(2):129-132.
19. Vasudev K, Grunze H. What works for delirious catatonic mania? BMJ Case Rep. 2010;2010:bcr0220102713. doi:10.1136/bcr.02.2010.2713
20. Jacobowski NL, Heckers S, Bobo WV. Delirious mania: detection, diagnosis, and clinical management in the acute setting. J Psychiatr Pract. 2013;19(1):15-28.
21. Reinfeld S, Yacoub A. A case of delirious mania induced by COVID-19 treated with electroconvulsive therapy. J ECT. 2021;37(4):e38-e39.
22. Lee BS, Huang SS, Hsu WY, et al. Clinical features of delirious mania: a series of five cases and a brief literature review. BMC Psychiatry. 2012;12:65. doi:10.1186/1471-244X-12-65
23. Bipeta R, Khan MA. Delirious mania: can we get away with this concept? A case report and review of the literature. Case Rep Psychiatry. 2012;2012:720354. doi:10.1155/2012/720354
24. Nunes AL, Cheniaux E. Delirium and mania with catatonic features in a Brazilian patient: response to ECT. J Neuropsychiatry Clin Neurosci. 2014;26(1):E1-E3.
25. Tegin C, Kalayil G, Lippmann S. Electroconvulsive therapy and delirious catatonic mania. J ECT. 2017;33(4):e33-e34.
26. Melo AL, Serra M. Delirious mania and catatonia. Bipolar Disord. 2020;22(6):647-649.
27. Fink M. Expanding the catatonia tent: recognizing electroconvulsive therapy responsive syndromes. J ECT. 2021;37(2):77-79.
28. Fink M. Electroconvulsive Therapy: A Guide for Professionals and Their Patients. Oxford University Press; 2009.
29. Fink M, Taylor MA. The many varieties of catatonia. Eur Arch Psychiatry Clin Neurosci. 2001;251 Suppl 1:I8-I13.
30. Vivanti A, Harvey K, Ash S, et al. Clinical assessment of dehydration in older people admitted to hospital: what are the strongest indicators? Arch Gerontol Geriatr. 2008;47(3):340-355.
31. Ware MR, Feller DB, Hall KL. Neuroleptic malignant syndrome: diagnosis and management. Prim Care Companion CNS Disord. 2018;20(1):17r02185. doi:10.4088/PCC.17r02185
32. Folstein MF, Folstein SE, McHugh PR. “Mini-mental state”. A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975;12(3):189-198.
33. Taylor MA, Abrams R. The phenomenology of mania. A new look at some old patients. Arch Gen Psychiatry. 1973;29(4):520-522.
34. Klerman GL. The spectrum of mania. Compr Psychiatry. 1981;22(1):11-20.
35. Elias A, Thomas N, Sackeim HA. Electroconvulsive therapy in mania: a review of 80 years of clinical experience. Am J Psychiatry. 2021;178(3):229-239.
36. Thom RP, Levy-Carrick NC, Bui M, et al. Delirium. Am J Psychiatry. 2019;176(10):785-793.
37. Charlton BG, Kavanau JL. Delirium and psychotic symptoms--an integrative model. Med Hypotheses. 2002;58(1):24-27.
38. Kramp P, Bolwig TG. Electroconvulsive therapy in acute delirious states. Compr Psychiatry. 1981;22(4):368-371.
39. Mash DC. Excited delirium and sudden death: a syndromal disorder at the extreme end of the neuropsychiatric continuum. Front Physiol. 2016;7:435.
40. Strawn JR, Keck PE Jr, Caroff SN. Neuroleptic malignant syndrome. Am J Psychiatry. 2007;164(6):870-876.
41. Charney DS. Monoamine dysfunction and the pathophysiology and treatment of depression. J Clin Psychiatry. 1998;59 Suppl 14:11-14.
42. Shiah IS, Yatham LN. Serotonin in mania and in the mechanism of action of mood stabilizers: a review of clinical studies. Bipolar Disord. 2000;2(2):77-92.
43. Dalley JW, Roiser JP. Dopamine, serotonin and impulsivity. Neuroscience. 2012;215:42-58.
44. Maldonado JR. Pathoetiological model of delirium: a comprehensive understanding of the neurobiology of delirium and an evidence-based approach to prevention and treatment. Crit Care Clin. 2008;24(4):789-856, ix.
45. Maldonado JR. Neuropathogenesis of delirium: review of current etiologic theories and common pathways. Am J Geriatr Psychiatry. 2013;21(12):1190-1222.
46. Rasmussen SA, Mazurek MF, Rosebush PI. Catatonia: our current understanding of its diagnosis, treatment and pathophysiology. World J Psychiatry. 2016;6(4):391-398.
47. Northoff G, Steinke R, Czcervenka C, et al. Decreased density of GABA-A receptors in the left sensorimotor cortex in akinetic catatonia: investigation of in vivo benzodiazepine receptor binding. J Neurol Neurosurg Psychiatry. 1999;67(4):445-450.
48. Daniels J. Catatonia: clinical aspects and neurobiological correlates. J Neuropsychiatry Clin Neurosci. 2009;21(4):371-380.
49. Restrepo-Martínez M, Chacón-González J, Bayliss L, et al. Delirious mania as a neuropsychiatric presentation in patients with anti-N-methyl-D-aspartate receptor encephalitis. Psychosomatics. 2020;61(1):64-69.
50. Dalmau J, Armangué T, Planagumà J, et al. An update on anti-NMDA receptor encephalitis for neurologists and psychiatrists: mechanisms and models. Lancet Neurol. 2019;18(11):1045-1057.
51. Steardo L Jr, Steardo L, Verkhratsky A. Psychiatric face of COVID-19. Transl Psychiatry. 2020;10(1):261.
52. Iqbal Y, Al Abdulla MA, Albrahim S, et al. Psychiatric presentation of patients with acute SARS-CoV-2 infection: a retrospective review of 50 consecutive patients seen by a consultation-liaison psychiatry team. BJPsych Open. 2020;6(5):e109.
53. Gouse BM, Spears WE, Nieves Archibald A, et al. Catatonia in a hospitalized patient with COVID-19 and proposed immune-mediated mechanism. Brain Behav Immun. 2020;89:529-530.
54. Caan MP, Lim CT, Howard M. A case of catatonia in a man with COVID-19. Psychosomatics. 2020;61(5):556-560.
55. Zain SM, Muthukanagaraj P, Rahman N. Excited catatonia - a delayed neuropsychiatric complication of COVID-19 infection. Cureus. 2021;13(3):e13891.
56. Chowdhury MA, Hossain N, Kashem MA, et al. Immune response in COVID-19: a review. J Infect Public Health. 2020;13(11):1619-1629.
57. Radhakrishnan R, Kaser M, Guloksuz S. The link between the immune system, environment, and psychosis. Schizophr Bull. 2017;43(4):693-697.
58. Fink M, Kellner CH, McCall WV. Optimizing ECT technique in treating catatonia. J ECT. 2016;32(3):149-150.
59. Cohen WJ, Cohen NH. Lithium carbonate, haloperidol, and irreversible brain damage. JAMA. 1974;230(9):1283-1287.
60. Davis MJ, de Nesnera A, Folks DG. Confused and nearly naked after going on spending sprees. Current Psychiatry. 2014;13(7):56-62.
61. Stanley M, Chippa V, Aeddula NR, et al. Rhabdomyolysis. StatPearls Publishing; 2021.
62. Fink M, Taylor MA. The catatonia syndrome: forgotten but not gone. Arch Gen Psychiatry. 2009;66(11):1173-1177.
63. Hiremani RM, Thirthalli J, Tharayil BS, et al. Double-blind randomized controlled study comparing short-term efficacy of bifrontal and bitemporal electroconvulsive therapy in acute mania. Bipolar Disord. 2008;10(6):701-707.
64. Barekatain M, Jahangard L, Haghighi M, et al. Bifrontal versus bitemporal electroconvulsive therapy in severe manic patients. J ECT. 2008;24(3):199-202.