Neurosurgery Operating Room Efficiency During the COVID-19 Era


From the Department of Neurological Surgery, Vanderbilt University Medical Center, Nashville, TN (Stefan W. Koester, Puja Jagasia, and Drs. Liles, Dambrino IV, Feldman, and Chambless), and the Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, TN (Drs. Mathews and Tiwari).

ABSTRACT

Background: The COVID-19 pandemic has had broad effects on surgical care, including operating room (OR) staffing, personal protective equipment (PPE) utilization, and newly implemented anti-infective measures. Our aim was to assess neurosurgery OR efficiency before the COVID-19 pandemic, during peak COVID-19, and during current times.

Methods: Institutional perioperative databases at a single, high-volume neurosurgical center were queried for operations performed from December 2019 through October 2021. March 12, 2020, the day that the state of Tennessee declared a state of emergency, was chosen as the onset of the COVID-19 pandemic. The 90-day periods before and after this date defined the pre-COVID-19 and peak-COVID-19 periods, and the remaining months through October 2021 defined the post-peak-restrictions period for comparative analysis. Outcomes included first-start delay and OR turnover time between neurosurgical cases. Preset threshold times were used in analyses to adjust for normal leniency in OR scheduling (15 minutes for first start and 90 minutes for turnover). Univariate analysis used the Wilcoxon rank-sum test for continuous outcomes and the chi-square test and Fisher’s exact test for categorical comparisons. Significance was defined as P < .05.

Results: First-start time was analyzed in 426 pre-COVID-19, 357 peak-restrictions, and 2304 post-peak-restrictions cases. The unadjusted mean delay length differed significantly between the time periods, but the magnitude of the increase was immaterial (mean [SD] minutes, 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004). The adjusted average delay length and the proportion of cases delayed beyond the 15-minute threshold were not significantly different. The proportions of cases that started early, or more than 15 minutes early, were likewise unaffected. There was no significant change in turnover time during peak restrictions relative to the pre-COVID-19 period (mean [SD], 88 [100] vs 85 [95] minutes), and turnover time has since remained unchanged (83 [87] minutes).

Conclusion: Our center was able to maintain OR efficiency before, during, and after peak restrictions even while instituting advanced infection-control strategies. Although some differences reached statistical significance, the delays were small in magnitude.

Keywords: operating room timing, hospital efficiency, socioeconomics, pandemic.

The COVID-19 pandemic has led to major changes in patient care, both from a surgical perspective and in regard to the inpatient hospital course. Safety protocols have been implemented nationwide to protect both patients and providers. Some elements of surgical care have changed drastically, including operating room (OR) staffing, personal protective equipment (PPE) utilization, and increased sterilization measures. Furloughs, layoffs, and reassignments due to the focus on nonelective and COVID-19–related cases challenged OR staffing and efficiency. Operating room staff with COVID-19 exposures or infections also caused last-minute changes in staffing. All of these scenarios can cause problems through actual understaffing or through staff members being pushed into highly specialized areas, such as neurosurgery, in which they have very little experience. Further obstacles to OR efficiency included policy changes involving PPE utilization and sterilization measures, as well as supply chain shortages of necessary resources such as PPE.

Neurosurgery in particular has been susceptible to COVID-19–related system-wide changes given operator proximity to the patient’s respiratory passages, the frequency of emergent cases, and varying anesthetic needs, as well as the high level of specialization needed to perform neurosurgical care. Previous studies have shown a change in the makeup of neurosurgical patients seeking care, as well as in the acuity of their neurological consults.1 A study in orthopedic surgery by Andreata et al demonstrated worsened OR efficiency, with significantly increased first-start and turnover times.2 In the COVID-19 era, OR quality and safety are crucially important to both patients and providers. Providing this safe and effective care in an efficient manner is important for optimal neurosurgical management in the long term.3 Moreover, the financial burden of implementing new protocols and standards can be compounded by additional financial losses due to reduced OR efficiency.


Methods

To analyze the effect of COVID-19 on neurosurgical OR efficiency, institutional perioperative databases at a single high-volume center were queried for operations performed from December 2019 through October 2021. March 12, 2020, was chosen as the onset of COVID-19 for analytic purposes, as this was the date the state of Tennessee declared a state of emergency. The 90-day periods before and after this date defined the pre-COVID-19 and peak-COVID-19 periods, the latter covering the initial surge of cases. For comparison purposes, the post-peak-restrictions period was defined as the months following the first peak through October 2021 (approximately 17 months). COVID-19 burden was determined using a single-institution census of cases confirmed by polymerase chain reaction (PCR), from which the average number of COVID-19 cases in a given month was determined. This census reflects a scaled trend; the true number of COVID-19 cases in our hospital was not reported.
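
As an illustration, the period assignment reduces to a date comparison against the March 12, 2020, cutoff. The sketch below is in R, the language used for our analyses; the data frame and column names (cases, or_date) are hypothetical and do not reproduce our production code.

    # Minimal sketch of the period assignment, assuming a data frame `cases`
    # with a Date column `or_date` (hypothetical names).
    onset <- as.Date("2020-03-12")  # Tennessee state-of-emergency declaration
    cases$period <- cut(
      cases$or_date,
      breaks = c(onset - 90, onset, onset + 90, as.Date("2021-10-31")),
      labels = c("pre-COVID-19", "peak COVID-19", "post-peak restrictions"),
      right  = FALSE  # each interval includes its start date
    )
    # Dates outside these bounds come back as NA and drop out of the comparison.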

Neurosurgical and neuroendovascular cases were included in the analysis. Outcomes included first-start delay and OR turnover time between neurosurgical cases, the latter defined as the time from one patient leaving the room until the next patient entered. Preset threshold times were used in analyses to adjust for normal leniency in OR scheduling (15 minutes for first start and 90 minutes for turnover, the standard for our single-institution perioperative center). Statistical analyses, including data aggregation, were performed using R, version 4.0.1 (R Foundation for Statistical Computing). Univariate analysis of outcomes used the Wilcoxon rank-sum test for continuous measures, with the chi-square test and Fisher’s exact test for categorical comparisons. Patients’ demographic and clinical characteristics were analyzed using an independent 2-sample t-test for interval variables and a chi-square test for categorical variables. Significance was defined as P < .05.
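
A minimal sketch of the outcome computation and the univariate tests follows, assuming a hypothetical data frame df with POSIXct room-exit and room-entry timestamps (prev_out, next_in) and the period labels above; it is illustrative rather than a reproduction of our analysis code.

    # Turnover: minutes from one patient leaving the room to the next entering.
    df$turnover <- as.numeric(difftime(df$next_in, df$prev_out, units = "mins"))
    df$over_threshold <- df$turnover > 90  # 90-minute scheduling leniency

    # Continuous outcome between two periods: Wilcoxon rank-sum test.
    wilcox.test(df$turnover[df$period == "pre-COVID-19"],
                df$turnover[df$period == "peak COVID-19"])

    # Proportion of cases past the threshold across periods: chi-square test.
    chisq.test(table(df$period, df$over_threshold))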

Results

First-Start Time

First-start time was analyzed in 426 pre-COVID-19, 357 peak-COVID-19, and 2304 post-peak-COVID-19 cases. The unadjusted mean delay length was significantly different between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes, 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004) (Table 1).

Table 1. First-Start Time Analysis

The adjusted average delay length and the proportion of cases delayed beyond the 15-minute threshold were not significantly different, although both have been slightly higher since the onset of COVID-19. The proportions of cases that started early, or more than 15 minutes early, have also trended down since the onset of the pandemic, but again the difference was not significant. The temporal relationship of first-start delay, both unadjusted and adjusted, from December 2019 to October 2021 is shown in Figure 1. The trend of increasing delay is loosely associated with the COVID-19 burden experienced by our hospital: the start of COVID-19 as well as both COVID-19 peaks were associated with increased delays.

Figure 1. (A) Unadjusted and (B) adjusted first-start delay in operating room efficiency relative to COVID-19 census.

Turnover Time

Turnover time was assessed in 437 pre-COVID-19, 278 peak-restrictions, and 2411 post-peak-restrictions cases. Turnover time during peak restrictions was not significantly different from pre-COVID-19 (mean [SD], 88 [100] vs 85 [95] minutes) and has since remained relatively unchanged (83 [87] minutes; P = .78). A similar trend held for the proportion of cases with turnover time past 90 minutes and for average times past the 90-minute threshold (Table 2). The temporal relationship between COVID-19 burden and turnover time, both unadjusted and adjusted, from December 2019 to October 2021 is shown in Figure 2. Both panels demonstrate a slight initial increase in turnover time at the start of COVID-19, which stabilized with little variation thereafter.

Table 2. Turnover Time Analysis

Figure 2. (A) Unadjusted and (B) adjusted turnover time in operating room efficiency relative to COVID-19 census.
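
For readers interested in reproducing this kind of trend view, a rough sketch of the monthly aggregation behind Figures 1 and 2 follows, in base R. It continues the earlier hypothetical df, and census (a per-month scaled case count) is an assumed series, since only a scaled census was available to us.

    # Aggregate mean turnover by calendar month and overlay the scaled census.
    df$month <- format(df$prev_out, "%Y-%m")
    monthly  <- aggregate(turnover ~ month, data = df, FUN = mean)
    plot(seq_along(monthly$month), monthly$turnover, type = "l",
         xaxt = "n", xlab = "Month", ylab = "Mean turnover time (min)")
    axis(1, at = seq_along(monthly$month), labels = monthly$month, las = 2)
    lines(seq_along(monthly$month), census$scaled_cases, lty = 2)  # assumes census rows align with `monthly`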


Discussion

We analyzed the OR efficiency metrics of first-start delay and turnover time during the 90-day period before COVID-19 (pre-COVID-19), the 90 days following Tennessee declaring a state of emergency (peak COVID-19), and the time following this period (post-COVID-19) for all neurosurgical and neuroendovascular cases at Vanderbilt University Medical Center (VUMC). We found a significant difference in unadjusted mean first-start delay between the time periods, but the magnitude of the increase was immaterial (mean [SD] minutes for pre-COVID-19, peak COVID-19, and post-COVID-19: 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004). No significant increase in turnover time between cases was found across these 3 time periods. Based on the first-start delay and turnover time metrics, our center was able to maintain OR efficiency before, during, and after peak COVID-19.

After the Centers for Disease Control and Prevention released guidelines recommending deferring elective procedures to conserve beds and PPE, VUMC made the decision to suspend all elective surgical procedures from March 18 to April 24, 2020. Research conducted during the COVID-19 pandemic has identified more than 400 types of surgical procedures whose outcomes were negatively affected compared with outcomes from the same time frame in 2018 and 2019.4 For more than 20 of these procedure types, there was a significant association between procedure delay and adverse patient outcomes.4 Testing protocols for patients prior to surgery varied throughout the pandemic based on vaccination status and type of procedure. Before vaccines became widely available, all patients were required to obtain a PCR SARS-CoV-2 test within 48 to 72 hours of the scheduled procedure. If the patient’s procedure was urgent and testing was not feasible, the patient was treated as SARS-CoV-2–positive, which required all health care workers involved in the case to wear gowns, gloves, surgical masks, and eye protection. Preoperative testing likely helped maintain OR efficiency: patients whose results did not return before the scheduled procedure had their cases cancelled, leaving more staff available for fewer cases.

After vaccines became widely available to the public, preoperative testing requirements were relaxed, and only patients who were not fully vaccinated or were severely immunocompromised were required to test before procedures. However, only approximately 40% of the population of Tennessee was fully vaccinated in 2021, a rate reflected in VUMC’s patient population.5 As a result, many patients who received care at VUMC were still tested prior to procedures.

Adopting adequate safety protocols was found to be key for OR efficiency during the COVID-19 pandemic since performing surgery increased the risk of infection for each health care worker in the OR.6 VUMC protocols identified procedures that required enhanced safety measures to prevent infection of health care workers and avoid staffing shortages, which would decrease OR efficiency. Protocols mandated that only anesthesia team members were allowed to be in the OR during intubation and extubation of patients, which could be one factor leading to increased delays and decreased efficiency for some institutions. Methods for neurosurgeons to decrease risk of infection in the OR include postponing all nonurgent cases, reappraising the necessity for general anesthesia and endotracheal intubation, considering alternative surgical approaches that avoid the respiratory tract, and limiting the use of aerosol-generating instruments.7,8 VUMC’s success in implementing these protocols likely explains why our center was able to maintain OR efficiency throughout the COVID-19 pandemic.

A study by Andreata et al showed a significantly increased mean first-case delay and a nonsignificantly increased turnover time in orthopedic surgeries in Northern Italy when comparing surgeries performed during the COVID-19 pandemic to those performed before it.2 Other studies have indicated a similar trend of decreased OR efficiency during COVID-19 in other parts of the world.9,10 These findings contrast with ours for neurosurgical and neuroendovascular surgeries at VUMC, where any change was relatively immaterial. Factors that threatened to change OR efficiency, but did not result in meaningful changes in our institutional experience, include delays due to pending COVID-19 test results, safety procedures such as PPE donning, and the planning needed to ensure teams of non-overlapping providers in case a surgeon became infected.2,11-13


Globally, many surgery centers halted all elective surgeries during the initial COVID-19 spike to prevent a PPE shortage and mitigate the risk of infection for patients and health care workers.8,12,14 However, there is no centralized definition of which neurosurgical procedures are elective, so that decision was made at the surgeon or center level, which could lead to variability in efficiency trends.14 One study of neurosurgical procedures during COVID-19 found a 30% decline in all cases and a 23% decline in emergent procedures, showing that the decrease in volume was not due solely to cancellation of elective procedures.15 This decrease in elective and emergent surgeries created a backlog of cases and a loss of health care revenue, and it caused many patients to go without adequate care.10 Looking forward, it is imperative that surgical centers study COVID-19-era trends in OR efficiency and learn how to better maintain efficiency under future pandemic conditions to prevent a backlog of cases, lost revenue, and decreased access to care.

Limitations

Our data are from a single center and therefore may not be representative of other hospitals’ experiences, given different populations and different impacts from COVID-19. However, given our center’s high volume and diverse patient population, we believe our analysis highlights important trends in neurosurgical practice. Notably, patient and OR timing data, although digitally recorded, are entered manually by nurses into the electronic medical record, making them prone to error and variability; in our experience, however, any such error is minimal.

Conclusion

The COVID-19 pandemic has had far-reaching effects on health care worldwide, including neurosurgical care. OR efficiency across the United States generally worsened under the stresses of supply chain issues, staffing shortages, and cancellations. At our institution, we were able to maintain OR efficiency through the known COVID-19 peaks until October 2021. Continuously functional neurosurgical ORs are important for preventing delays in care and maintaining steady revenue so that hospitals and other health care entities remain solvent. Further study of OR efficiency is needed for health care systems to prepare for future pandemics and other resource-straining events in order to provide optimal patient care.

Corresponding author: Campbell Liles, MD, Vanderbilt University Medical Center, Department of Neurological Surgery, 1161 21st Ave. South, T4224 Medical Center North, Nashville, TN 37232-2380; [email protected]

Disclosures: None reported.

References

1. Koester SW, Catapano JS, Ma KL, et al. COVID-19 and neurosurgery consultation call volume at a single large tertiary center with a propensity-adjusted analysis. World Neurosurg. 2021;146:e768-e772. doi:10.1016/j.wneu.2020.11.017

2. Andreata M, Faraldi M, Bucci E, Lombardi G, Zagra L. Operating room efficiency and timing during coronavirus disease 2019 outbreak in a referral orthopaedic hospital in Northern Italy. Int Orthop. 2020;44(12):2499-2504. doi:10.1007/s00264-020-04772-x

3. Dexter F, Abouleish AE, Epstein RH, et al. Use of operating room information system data to predict the impact of reducing turnover times on staffing costs. Anesth Analg. 2003;97(4):1119-1126. doi:10.1213/01.ANE.0000082520.68800.79

4. Zheng NS, Warner JL, Osterman TJ, et al. A retrospective approach to evaluating potential adverse outcomes associated with delay of procedures for cardiovascular and cancer-related diagnoses in the context of COVID-19. J Biomed Inform. 2021;113:103657. doi:10.1016/j.jbi.2020.103657

5. Alcendor DJ. Targeting COVID-19 vaccine hesitancy in rural communities in Tennessee: implications for extending the COVID-19 pandemic in the South. Vaccines (Basel). 2021;9(11):1279. doi:10.3390/vaccines9111279

6. Perrone G, Giuffrida M, Bellini V, et al. Operating room setup: how to improve health care professionals safety during pandemic COVID-19: a quality improvement study. J Laparoendosc Adv Surg Tech A. 2021;31(1):85-89. doi:10.1089/lap.2020.0592

7. Iorio-Morin C, Hodaie M, Sarica C, et al. Letter: the risk of COVID-19 infection during neurosurgical procedures: a review of severe acute respiratory distress syndrome coronavirus 2 (SARS-CoV-2) modes of transmission and proposed neurosurgery-specific measures for mitigation. Neurosurgery. 2020;87(2):E178-E185. doi:10.1093/neuros/nyaa157

8. Gupta P, Muthukumar N, Rajshekhar V, et al. Neurosurgery and neurology practices during the novel COVID-19 pandemic: a consensus statement from India. Neurol India. 2020;68(2):246-254. doi:10.4103/0028-3886.283130

9. Mercer ST, Agarwal R, Dayananda KSS, et al. A comparative study looking at trauma and orthopaedic operating efficiency in the COVID-19 era. Perioper Care Oper Room Manag. 2020;21:100142. doi:10.1016/j.pcorm.2020.100142

10. Rozario N, Rozario D. Can machine learning optimize the efficiency of the operating room in the era of COVID-19? Can J Surg. 2020;63(6):E527-E529. doi:10.1503/cjs.016520

11. Toh KHQ, Barazanchi A, Rajaretnam NS, et al. COVID-19 response by New Zealand general surgical departments in tertiary metropolitan hospitals. ANZ J Surg. 2021;91(7-8):1352-1357. doi:10.1111/ans.17044

12. Moorthy RK, Rajshekhar V. Impact of COVID-19 pandemic on neurosurgical practice in India: a survey on personal protective equipment usage, testing, and perceptions on disease transmission. Neurol India. 2020;68(5):1133-1138. doi:10.4103/0028-3886.299173

13. Meneghini RM. Techniques and strategies to optimize efficiencies in the office and operating room: getting through the patient backlog and preserving hospital resources. J Arthroplasty. 2021;36(7S):S49-S51. doi:10.1016/j.arth.2021.03.010

14. Jean WC, Ironside NT, Sack KD, et al. The impact of COVID-19 on neurosurgeons and the strategy for triaging non-emergent operations: a global neurosurgery study. Acta Neurochir (Wien). 2020;162(6):1229-1240. doi:10.1007/s00701-020-04342-5

15. Raneri F, Rustemi O, Zambon G, et al. Neurosurgery in times of a pandemic: a survey of neurosurgical services during the COVID-19 outbreak in the Veneto region in Italy. Neurosurg Focus. 2020;49(6):E9. doi:10.3171/2020.9.FOCUS20691

Article PDF
Issue
Journal of Clinical Outcomes Management - 29(6)
Publications
Topics
Page Number
208-213
Sections
Article PDF
Article PDF

From the Department of Neurological Surgery, Vanderbilt University Medical Center, Nashville, TN (Stefan W. Koester, Puja Jagasia, and Drs. Liles, Dambrino IV, Feldman, and Chambless), and the Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, TN (Drs. Mathews and Tiwari).

ABSTRACT

Background: The COVID-19 pandemic has had broad effects on surgical care, including operating room (OR) staffing, personal protective equipment (PPE) utilization, and newly implemented anti-infective measures. Our aim was to assess neurosurgery OR efficiency before the COVID-19 pandemic, during peak COVID-19, and during current times.

Methods: Institutional perioperative databases at a single, high-volume neurosurgical center were queried for operations performed from December 2019 until October 2021. March 12, 2020, the day that the state of Tennessee declared a state of emergency, was chosen as the onset of the COVID-19 pandemic. The 90-day periods before and after this day were used to define the pre-COVID-19, peak-COVID-19, and post-peak restrictions time periods for comparative analysis. Outcomes included delay in first-start and OR turnover time between neurosurgical cases. Preset threshold times were used in analyses to adjust for normal leniency in OR scheduling (15 minutes for first start and 90 minutes for turnover). Univariate analysis used Wilcoxon rank-sum test for continuous outcomes, while chi-square test and Fisher’s exact test were used for categorical comparisons. Significance was defined as P < .05.

Results: First-start time was analyzed in 426 pre-COVID-19, 357 peak-restrictions, and 2304 post-peak-restrictions cases. The unadjusted mean delay length was found to be significantly different between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes, 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004). The adjusted average delay length and proportion of cases delayed beyond the 15-minute threshold were not significantly different. The proportion of cases that started early, as well as significantly early past a 15-minute threshold, have not been impacted. There was no significant change in turnover time during peak restrictions relative to the pre-COVID-19 period (88 [100] minutes vs 85 [95] minutes), and turnover time has since remained unchanged (83 [87] minutes).

Conclusion: Our center was able to maintain OR efficiency before, during, and after peak restrictions even while instituting advanced infection-control strategies. While there were significant changes, delays were relatively small in magnitude.

Keywords: operating room timing, hospital efficiency, socioeconomics, pandemic.

The COVID-19 pandemic has led to major changes in patient care both from a surgical perspective and in regard to inpatient hospital course. Safety protocols nationwide have been implemented to protect both patients and providers. Some elements of surgical care have drastically changed, including operating room (OR) staffing, personal protective equipment (PPE) utilization, and increased sterilization measures. Furloughs, layoffs, and reassignments due to the focus on nonelective and COVID-19–related cases challenged OR staffing and efficiency. Operating room staff with COVID-19 exposures or COVID-19 infections also caused last-minute changes in staffing. All of these scenarios can cause issues due to actual understaffing or due to staff members being pushed into highly specialized areas, such as neurosurgery, in which they have very little experience. A further obstacle to OR efficiency included policy changes involving PPE utilization, sterilization measures, and supply chain shortages of necessary resources such as PPE.

Neurosurgery in particular has been susceptible to COVID-19–related system-wide changes given operator proximity to the patient’s respiratory passages, frequency of emergent cases, and varying anesthetic needs, as well as the high level of specialization needed to perform neurosurgical care. Previous studies have shown a change in the makeup of neurosurgical patients seeking care, as well as in the acuity of neurological consult of these patients.1 A study in orthopedic surgery by Andreata et al demonstrated worsened OR efficiency, with significantly increased first-start and turnover times.2 In the COVID-19 era, OR quality and safety are crucially important to both patients and providers. Providing this safe and effective care in an efficient manner is important for optimal neurosurgical management in the long term.3 Moreover, the financial burden of implementing new protocols and standards can be compounded by additional financial losses due to reduced OR efficiency.

 

 

Methods

To analyze the effect of COVID-19 on neurosurgical OR efficiency, institutional perioperative databases at a single high-volume center were queried for operations performed from December 2019 until October 2021. March 12, 2020, was chosen as the onset of COVID-19 for analytic purposes, as this was the date when the state of Tennessee declared a state of emergency. The 90-day periods before and after this date were used for comparative analysis for pre-COVID-19, peak COVID-19, and post-peak-restrictions time periods. The peak COVID-19 period was defined as the 90-day period following the initial onset of COVID-19 and the surge of cases. For comparison purposes, post-peak COVID-19 was defined as the months following the first peak until October 2021 (approximately 17 months). COVID-19 burden was determined using a COVID-19 single-institution census of confirmed cases by polymerase chain reaction (PCR) for which the average number of cases of COVID-19 during a given month was determined. This number is a scaled trend, and a true number of COVID-19 cases in our hospital was not reported.

Neurosurgical and neuroendovascular cases were included in the analysis. Outcomes included delay in first-start and OR turnover time between neurosurgical cases, defined as the time from the patient leaving the room until the next patient entered the room. Preset threshold times were used in analyses to adjust for normal leniency in OR scheduling (15 minutes for first start and 90 minutes for turnover, which is a standard for our single-institution perioperative center). Statistical analyses, including data aggregation, were performed using R, version 4.0.1 (R Foundation for Statistical Computing). Patients’ demographic and clinical characteristics were analyzed using an independent 2-sample t-test for interval variables and a chi-square test for categorical variables. Significance was defined as P < .05.

Results

First-Start Time

First-start time was analyzed in 426 pre-COVID-19, 357 peak-COVID-19, and 2304 post-peak-COVID-19 cases. The unadjusted mean delay length was significantly different between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes, 6 [18] vs 10 [21] vs 8 [20], respectively; P=.004) (Table 1).

First-Start Time Analysis

The adjusted average delay length and proportion of cases delayed beyond the 15-minute threshold were not significantly different, but they have been slightly higher since the onset of COVID-19. The proportion of cases that have started early, as well as significantly early past a 15-minute threshold, have also trended down since the onset of the COVID-19 pandemic, but this difference was again not significant. The temporal relationship of first-start delay, both unadjusted and adjusted, from December 2019 to October 2021 is shown in Figure 1. The trend of increasing delay is loosely associated with the COVID-19 burden experienced by our hospital. The start of COVID-19 as well as both COVID-19 peaks have been associated with increased delays in our hospital.

(A) Unadjusted and (B) adjusted first-start delay in operating room efficiency relative to COVID-19 census.

Turnover Time

Turnover time was assessed in 437 pre-COVID-19, 278 peak-restrictions, and 2411 post-peak-restrictions cases. Turnover time during peak restrictions was not significantly different from pre-COVID-19 (88 [100] vs 85 [95]) and has since remained relatively unchanged (83 [87], P = .78). A similar trend held for comparisons of proportion of cases with turnover time past 90 minutes and average times past the 90-minute threshold (Table 2). The temporal relationship between COVID-19 burden and turnover time, both unadjusted and adjusted, from December 2019 to October 2021 is shown in Figure 2. Both figures demonstrate a slight initial increase in turnover time delay at the start of COVID-19, which stabilized with little variation thereafter.

Turnover Time Analysis

(A) Unadjusted and (B) adjusted turnover time in operating room efficiency relative to COVID-19 census.

 

 

Discussion

We analyzed the OR efficiency metrics of first-start and turnover time during the 90-day period before COVID-19 (pre-COVID-19), the 90 days following Tennessee declaring a state of emergency (peak COVID-19), and the time following this period (post-COVID-19) for all neurosurgical and neuroendovascular cases at Vanderbilt University Medical Center (VUMC). We found a significant difference in unadjusted mean delay length in first-start time between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes for pre-COVID-19, peak-COVID-19, and post-COVID-19: 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004). No significant increase in turnover time between cases was found between these 3 time periods. Based on metrics from first-start delay and turnover time, our center was able to maintain OR efficiency before, during, and after peak COVID-19.

After the Centers for Disease Control and Prevention released guidelines recommending deferring elective procedures to conserve beds and PPE, VUMC made the decision to suspend all elective surgical procedures from March 18 to April 24, 2020. Prior research conducted during the COVID-19 pandemic has demonstrated more than 400 types of surgical procedures with negatively impacted outcomes when compared to surgical outcomes from the same time frame in 2018 and 2019.4 For more than 20 of these types of procedures, there was a significant association between procedure delay and adverse patient outcomes.4 Testing protocols for patients prior to surgery varied throughout the pandemic based on vaccination status and type of procedure. Before vaccines became widely available, all patients were required to obtain a PCR SARS-CoV-2 test within 48 to 72 hours of the scheduled procedure. If the patient’s procedure was urgent and testing was not feasible, the patient was treated as a SARS-CoV-2–positive patient, which required all health care workers involved in the case to wear gowns, gloves, surgical masks, and eye protection. Testing patients preoperatively likely helped to maintain OR efficiency since not all patients received test results prior to the scheduled procedure, leading to cancellations of cases and therefore more staff available for fewer cases.

After vaccines became widely available to the public, testing requirements for patients preoperatively were relaxed, and only patients who were not fully vaccinated or severely immunocompromised were required to test prior to procedures. However, approximately 40% of the population in Tennessee was fully vaccinated in 2021, which reflects the patient population of VUMC.5 This means that many patients who received care at VUMC were still tested prior to procedures.

Adopting adequate safety protocols was found to be key for OR efficiency during the COVID-19 pandemic since performing surgery increased the risk of infection for each health care worker in the OR.6 VUMC protocols identified procedures that required enhanced safety measures to prevent infection of health care workers and avoid staffing shortages, which would decrease OR efficiency. Protocols mandated that only anesthesia team members were allowed to be in the OR during intubation and extubation of patients, which could be one factor leading to increased delays and decreased efficiency for some institutions. Methods for neurosurgeons to decrease risk of infection in the OR include postponing all nonurgent cases, reappraising the necessity for general anesthesia and endotracheal intubation, considering alternative surgical approaches that avoid the respiratory tract, and limiting the use of aerosol-generating instruments.7,8 VUMC’s success in implementing these protocols likely explains why our center was able to maintain OR efficiency throughout the COVID-19 pandemic.

A study conducted by Andreata et al showed a significantly increased mean first-case delay and a nonsignificant increased turnover time in orthopedic surgeries in Northern Italy when comparing surgeries performed during the COVID-19 pandemic to those performed prior to COVID-19.2 Other studies have indicated a similar trend in decreased OR efficiency during COVID-19 in other areas around the world.9,10 These findings are not consistent with our own findings for neurosurgical and neuroendovascular surgeries at VUMC, and any change at our institution was relatively immaterial. Factors that threatened to change OR efficiency—but did not result in meaningful changes in our institutional experience—include delays due to pending COVID-19 test results, safety procedures such as PPE donning, and planning difficulties to ensure the existence of teams with non-overlapping providers in the case of a surgeon being infected.2,11-13

 

 

Globally, many surgery centers halted all elective surgeries during the initial COVID-19 spike to prevent a PPE shortage and mitigate risk of infection of patients and health care workers.8,12,14 However, there is no centralized definition of which neurosurgical procedures are elective, so that decision was made on a surgeon or center level, which could lead to variability in efficiency trends.14 One study on neurosurgical procedures during COVID-19 found a 30% decline in all cases and a 23% decline in emergent procedures, showing that the decrease in volume was not only due to cancellation of elective procedures.15 This decrease in elective and emergent surgeries created a backlog of surgeries as well as a loss in health care revenue, and caused many patients to go without adequate health care.10 Looking forward, it is imperative that surgical centers study trends in OR efficiency from COVID-19 and learn how to better maintain OR efficiency during future pandemic conditions to prevent a backlog of cases, loss of health care revenue, and decreased health care access.

Limitations

Our data are from a single center and therefore may not be representative of experiences of other hospitals due to different populations and different impacts from COVID-19. However, given our center’s high volume and diverse patient population, we believe our analysis highlights important trends in neurosurgery practice. Notably, data for patient and OR timing are digitally generated and are entered manually by nurses in the electronic medical record, making it prone to errors and variability. This is in our experience, and if any error is present, we believe it is minimal.

Conclusion

The COVID-19 pandemic has had far-reaching effects on health care worldwide, including neurosurgical care. OR efficiency across the United States generally worsened given the stresses of supply chain issues, staffing shortages, and cancellations. At our institution, we were able to maintain OR efficiency during the known COVID-19 peaks until October 2021. Continually functional neurosurgical ORs are important in preventing delays in care and maintaining a steady revenue in order for hospitals and other health care entities to remain solvent. Further study of OR efficiency is needed for health care systems to prepare for future pandemics and other resource-straining events in order to provide optimal patient care.

Corresponding author: Campbell Liles, MD, Vanderbilt University Medical Center, Department of Neurological Surgery, 1161 21st Ave. South, T4224 Medical Center North, Nashville, TN 37232-2380; [email protected]

Disclosures: None reported.

From the Department of Neurological Surgery, Vanderbilt University Medical Center, Nashville, TN (Stefan W. Koester, Puja Jagasia, and Drs. Liles, Dambrino IV, Feldman, and Chambless), and the Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, TN (Drs. Mathews and Tiwari).

ABSTRACT

Background: The COVID-19 pandemic has had broad effects on surgical care, including operating room (OR) staffing, personal protective equipment (PPE) utilization, and newly implemented anti-infective measures. Our aim was to assess neurosurgery OR efficiency before the COVID-19 pandemic, during peak COVID-19, and during current times.

Methods: Institutional perioperative databases at a single, high-volume neurosurgical center were queried for operations performed from December 2019 until October 2021. March 12, 2020, the day that the state of Tennessee declared a state of emergency, was chosen as the onset of the COVID-19 pandemic. The 90-day periods before and after this day were used to define the pre-COVID-19, peak-COVID-19, and post-peak restrictions time periods for comparative analysis. Outcomes included delay in first-start and OR turnover time between neurosurgical cases. Preset threshold times were used in analyses to adjust for normal leniency in OR scheduling (15 minutes for first start and 90 minutes for turnover). Univariate analysis used Wilcoxon rank-sum test for continuous outcomes, while chi-square test and Fisher’s exact test were used for categorical comparisons. Significance was defined as P < .05.

Results: First-start time was analyzed in 426 pre-COVID-19, 357 peak-restrictions, and 2304 post-peak-restrictions cases. The unadjusted mean delay length was found to be significantly different between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes, 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004). The adjusted average delay length and proportion of cases delayed beyond the 15-minute threshold were not significantly different. The proportion of cases that started early, as well as significantly early past a 15-minute threshold, have not been impacted. There was no significant change in turnover time during peak restrictions relative to the pre-COVID-19 period (88 [100] minutes vs 85 [95] minutes), and turnover time has since remained unchanged (83 [87] minutes).

Conclusion: Our center was able to maintain OR efficiency before, during, and after peak restrictions even while instituting advanced infection-control strategies. While there were significant changes, delays were relatively small in magnitude.

Keywords: operating room timing, hospital efficiency, socioeconomics, pandemic.

The COVID-19 pandemic has led to major changes in patient care both from a surgical perspective and in regard to inpatient hospital course. Safety protocols nationwide have been implemented to protect both patients and providers. Some elements of surgical care have drastically changed, including operating room (OR) staffing, personal protective equipment (PPE) utilization, and increased sterilization measures. Furloughs, layoffs, and reassignments due to the focus on nonelective and COVID-19–related cases challenged OR staffing and efficiency. Operating room staff with COVID-19 exposures or COVID-19 infections also caused last-minute changes in staffing. All of these scenarios can cause issues due to actual understaffing or due to staff members being pushed into highly specialized areas, such as neurosurgery, in which they have very little experience. A further obstacle to OR efficiency included policy changes involving PPE utilization, sterilization measures, and supply chain shortages of necessary resources such as PPE.

Neurosurgery in particular has been susceptible to COVID-19–related system-wide changes given operator proximity to the patient’s respiratory passages, frequency of emergent cases, and varying anesthetic needs, as well as the high level of specialization needed to perform neurosurgical care. Previous studies have shown a change in the makeup of neurosurgical patients seeking care, as well as in the acuity of neurological consult of these patients.1 A study in orthopedic surgery by Andreata et al demonstrated worsened OR efficiency, with significantly increased first-start and turnover times.2 In the COVID-19 era, OR quality and safety are crucially important to both patients and providers. Providing this safe and effective care in an efficient manner is important for optimal neurosurgical management in the long term.3 Moreover, the financial burden of implementing new protocols and standards can be compounded by additional financial losses due to reduced OR efficiency.

 

 

Methods

To analyze the effect of COVID-19 on neurosurgical OR efficiency, institutional perioperative databases at a single high-volume center were queried for operations performed from December 2019 until October 2021. March 12, 2020, was chosen as the onset of COVID-19 for analytic purposes, as this was the date when the state of Tennessee declared a state of emergency. The 90-day periods before and after this date were used for comparative analysis for pre-COVID-19, peak COVID-19, and post-peak-restrictions time periods. The peak COVID-19 period was defined as the 90-day period following the initial onset of COVID-19 and the surge of cases. For comparison purposes, post-peak COVID-19 was defined as the months following the first peak until October 2021 (approximately 17 months). COVID-19 burden was determined using a COVID-19 single-institution census of confirmed cases by polymerase chain reaction (PCR) for which the average number of cases of COVID-19 during a given month was determined. This number is a scaled trend, and a true number of COVID-19 cases in our hospital was not reported.

Neurosurgical and neuroendovascular cases were included in the analysis. Outcomes included delay in first-start and OR turnover time between neurosurgical cases, defined as the time from the patient leaving the room until the next patient entered the room. Preset threshold times were used in analyses to adjust for normal leniency in OR scheduling (15 minutes for first start and 90 minutes for turnover, which is a standard for our single-institution perioperative center). Statistical analyses, including data aggregation, were performed using R, version 4.0.1 (R Foundation for Statistical Computing). Patients’ demographic and clinical characteristics were analyzed using an independent 2-sample t-test for interval variables and a chi-square test for categorical variables. Significance was defined as P < .05.

Results

First-Start Time

First-start time was analyzed in 426 pre-COVID-19, 357 peak-COVID-19, and 2304 post-peak-COVID-19 cases. The unadjusted mean delay length was significantly different between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes, 6 [18] vs 10 [21] vs 8 [20], respectively; P=.004) (Table 1).

First-Start Time Analysis

The adjusted average delay length and proportion of cases delayed beyond the 15-minute threshold were not significantly different, but they have been slightly higher since the onset of COVID-19. The proportion of cases that have started early, as well as significantly early past a 15-minute threshold, have also trended down since the onset of the COVID-19 pandemic, but this difference was again not significant. The temporal relationship of first-start delay, both unadjusted and adjusted, from December 2019 to October 2021 is shown in Figure 1. The trend of increasing delay is loosely associated with the COVID-19 burden experienced by our hospital. The start of COVID-19 as well as both COVID-19 peaks have been associated with increased delays in our hospital.

(A) Unadjusted and (B) adjusted first-start delay in operating room efficiency relative to COVID-19 census.

Turnover Time

Turnover time was assessed in 437 pre-COVID-19, 278 peak-restrictions, and 2411 post-peak-restrictions cases. Turnover time during peak restrictions was not significantly different from pre-COVID-19 (88 [100] vs 85 [95]) and has since remained relatively unchanged (83 [87], P = .78). A similar trend held for comparisons of proportion of cases with turnover time past 90 minutes and average times past the 90-minute threshold (Table 2). The temporal relationship between COVID-19 burden and turnover time, both unadjusted and adjusted, from December 2019 to October 2021 is shown in Figure 2. Both figures demonstrate a slight initial increase in turnover time delay at the start of COVID-19, which stabilized with little variation thereafter.

Turnover Time Analysis

(A) Unadjusted and (B) adjusted turnover time in operating room efficiency relative to COVID-19 census.

 

 

Discussion

We analyzed the OR efficiency metrics of first-start and turnover time during the 90-day period before COVID-19 (pre-COVID-19), the 90 days following Tennessee declaring a state of emergency (peak COVID-19), and the time following this period (post-COVID-19) for all neurosurgical and neuroendovascular cases at Vanderbilt University Medical Center (VUMC). We found a significant difference in unadjusted mean delay length in first-start time between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes for pre-COVID-19, peak-COVID-19, and post-COVID-19: 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004). No significant increase in turnover time between cases was found between these 3 time periods. Based on metrics from first-start delay and turnover time, our center was able to maintain OR efficiency before, during, and after peak COVID-19.

After the Centers for Disease Control and Prevention released guidelines recommending deferring elective procedures to conserve beds and PPE, VUMC made the decision to suspend all elective surgical procedures from March 18 to April 24, 2020. Prior research conducted during the COVID-19 pandemic has demonstrated more than 400 types of surgical procedures with negatively impacted outcomes when compared to surgical outcomes from the same time frame in 2018 and 2019.4 For more than 20 of these types of procedures, there was a significant association between procedure delay and adverse patient outcomes.4 Testing protocols for patients prior to surgery varied throughout the pandemic based on vaccination status and type of procedure. Before vaccines became widely available, all patients were required to obtain a PCR SARS-CoV-2 test within 48 to 72 hours of the scheduled procedure. If the patient’s procedure was urgent and testing was not feasible, the patient was treated as a SARS-CoV-2–positive patient, which required all health care workers involved in the case to wear gowns, gloves, surgical masks, and eye protection. Testing patients preoperatively likely helped to maintain OR efficiency since not all patients received test results prior to the scheduled procedure, leading to cancellations of cases and therefore more staff available for fewer cases.

After vaccines became widely available to the public, testing requirements for patients preoperatively were relaxed, and only patients who were not fully vaccinated or severely immunocompromised were required to test prior to procedures. However, approximately 40% of the population in Tennessee was fully vaccinated in 2021, which reflects the patient population of VUMC.5 This means that many patients who received care at VUMC were still tested prior to procedures.

Adopting adequate safety protocols was found to be key for OR efficiency during the COVID-19 pandemic since performing surgery increased the risk of infection for each health care worker in the OR.6 VUMC protocols identified procedures that required enhanced safety measures to prevent infection of health care workers and avoid staffing shortages, which would decrease OR efficiency. Protocols mandated that only anesthesia team members were allowed to be in the OR during intubation and extubation of patients, which could be one factor leading to increased delays and decreased efficiency for some institutions. Methods for neurosurgeons to decrease risk of infection in the OR include postponing all nonurgent cases, reappraising the necessity for general anesthesia and endotracheal intubation, considering alternative surgical approaches that avoid the respiratory tract, and limiting the use of aerosol-generating instruments.7,8 VUMC’s success in implementing these protocols likely explains why our center was able to maintain OR efficiency throughout the COVID-19 pandemic.

A study conducted by Andreata et al showed a significantly increased mean first-case delay and a nonsignificant increased turnover time in orthopedic surgeries in Northern Italy when comparing surgeries performed during the COVID-19 pandemic to those performed prior to COVID-19.2 Other studies have indicated a similar trend in decreased OR efficiency during COVID-19 in other areas around the world.9,10 These findings are not consistent with our own findings for neurosurgical and neuroendovascular surgeries at VUMC, and any change at our institution was relatively immaterial. Factors that threatened to change OR efficiency—but did not result in meaningful changes in our institutional experience—include delays due to pending COVID-19 test results, safety procedures such as PPE donning, and planning difficulties to ensure the existence of teams with non-overlapping providers in the case of a surgeon being infected.2,11-13

 

 

Globally, many surgery centers halted all elective surgeries during the initial COVID-19 spike to prevent a PPE shortage and mitigate risk of infection of patients and health care workers.8,12,14 However, there is no centralized definition of which neurosurgical procedures are elective, so that decision was made on a surgeon or center level, which could lead to variability in efficiency trends.14 One study on neurosurgical procedures during COVID-19 found a 30% decline in all cases and a 23% decline in emergent procedures, showing that the decrease in volume was not only due to cancellation of elective procedures.15 This decrease in elective and emergent surgeries created a backlog of surgeries as well as a loss in health care revenue, and caused many patients to go without adequate health care.10 Looking forward, it is imperative that surgical centers study trends in OR efficiency from COVID-19 and learn how to better maintain OR efficiency during future pandemic conditions to prevent a backlog of cases, loss of health care revenue, and decreased health care access.

Limitations

Our data are from a single center and therefore may not be representative of experiences of other hospitals due to different populations and different impacts from COVID-19. However, given our center’s high volume and diverse patient population, we believe our analysis highlights important trends in neurosurgery practice. Notably, data for patient and OR timing are digitally generated and are entered manually by nurses in the electronic medical record, making it prone to errors and variability. This is in our experience, and if any error is present, we believe it is minimal.

Conclusion

The COVID-19 pandemic has had far-reaching effects on health care worldwide, including neurosurgical care. OR efficiency across the United States generally worsened given the stresses of supply chain issues, staffing shortages, and cancellations. At our institution, we were able to maintain OR efficiency during the known COVID-19 peaks until October 2021. Continually functional neurosurgical ORs are important in preventing delays in care and maintaining a steady revenue in order for hospitals and other health care entities to remain solvent. Further study of OR efficiency is needed for health care systems to prepare for future pandemics and other resource-straining events in order to provide optimal patient care.

Corresponding author: Campbell Liles, MD, Vanderbilt University Medical Center, Department of Neurological Surgery, 1161 21st Ave. South, T4224 Medical Center North, Nashville, TN 37232-2380; [email protected]

Disclosures: None reported.

References

1. Koester SW, Catapano JS, Ma KL, et al. COVID-19 and neurosurgery consultation call volume at a single large tertiary center with a propensity- adjusted analysis. World Neurosurg. 2021;146:e768-e772. doi:10.1016/j.wneu.2020.11.017

2. Andreata M, Faraldi M, Bucci E, Lombardi G, Zagra L. Operating room efficiency and timing during coronavirus disease 2019 outbreak in a referral orthopaedic hospital in Northern Italy. Int Orthop. 2020;44(12):2499-2504. doi:10.1007/s00264-020-04772-x

3. Dexter F, Abouleish AE, Epstein RH, et al. Use of operating room information system data to predict the impact of reducing turnover times on staffing costs. Anesth Analg. 2003;97(4):1119-1126. doi:10.1213/01.ANE.0000082520.68800.79

4. Zheng NS, Warner JL, Osterman TJ, et al. A retrospective approach to evaluating potential adverse outcomes associated with delay of procedures for cardiovascular and cancer-related diagnoses in the context of COVID-19. J Biomed Inform. 2021;113:103657. doi:10.1016/j.jbi.2020.103657

5. Alcendor DJ. Targeting COVID-19 vaccine hesitancy in rural communities in Tennessee: implications for extending the COVID- 19 pandemic in the South. Vaccines (Basel). 2021;9(11):1279. doi:10.3390/vaccines9111279

6. Perrone G, Giuffrida M, Bellini V, et al. Operating room setup: how to improve health care professionals safety during pandemic COVID- 19: a quality improvement study. J Laparoendosc Adv Surg Tech A. 2021;31(1):85-89. doi:10.1089/lap.2020.0592

7. Iorio-Morin C, Hodaie M, Sarica C, et al. Letter: the risk of COVID-19 infection during neurosurgical procedures: a review of severe acute respiratory distress syndrome coronavirus 2 (SARS-CoV-2) modes of transmission and proposed neurosurgery-specific measures for mitigation. Neurosurgery. 2020;87(2):E178-E185. doi:10.1093/ neuros/nyaa157

8. Gupta P, Muthukumar N, Rajshekhar V, et al. Neurosurgery and neurology practices during the novel COVID-19 pandemic: a consensus statement from India. Neurol India. 2020;68(2):246-254. doi:10.4103/0028-3886.283130

9. Mercer ST, Agarwal R, Dayananda KSS, et al. A comparative study looking at trauma and orthopaedic operating efficiency in the COVID-19 era. Perioper Care Oper Room Manag. 2020;21:100142. doi:10.1016/j.pcorm.2020.100142

10. Rozario N, Rozario D. Can machine learning optimize the efficiency of the operating room in the era of COVID-19? Can J Surg. 2020;63(6):E527-E529. doi:10.1503/cjs.016520

11. Toh KHQ, Barazanchi A, Rajaretnam NS, et al. COVID-19 response by New Zealand general surgical departments in tertiary metropolitan hospitals. ANZ J Surg. 2021;91(7-8):1352-1357. doi:10.1111/ ans.17044

12. Moorthy RK, Rajshekhar V. Impact of COVID-19 pandemic on neurosurgical practice in India: a survey on personal protective equipment usage, testing, and perceptions on disease transmission. Neurol India. 2020;68(5):1133-1138. doi:10.4103/0028- 3886.299173

13. Meneghini RM. Techniques and strategies to optimize efficiencies in the office and operating room: getting through the patient backlog and preserving hospital resources. J Arthroplasty. 2021;36(7S):S49-S51. doi:10.1016/j.arth.2021.03.010

14. Jean WC, Ironside NT, Sack KD, et al. The impact of COVID- 19 on neurosurgeons and the strategy for triaging non-emergent operations: a global neurosurgery study. Acta Neurochir (Wien). 2020;162(6):1229-1240. doi:10.1007/s00701-020- 04342-5

15. Raneri F, Rustemi O, Zambon G, et al. Neurosurgery in times of a pandemic: a survey of neurosurgical services during the COVID-19 outbreak in the Veneto region in Italy. Neurosurg Focus. 2020;49(6):E9. doi:10.3171/2020.9.FOCUS20691

References

1. Koester SW, Catapano JS, Ma KL, et al. COVID-19 and neurosurgery consultation call volume at a single large tertiary center with a propensity- adjusted analysis. World Neurosurg. 2021;146:e768-e772. doi:10.1016/j.wneu.2020.11.017

2. Andreata M, Faraldi M, Bucci E, Lombardi G, Zagra L. Operating room efficiency and timing during coronavirus disease 2019 outbreak in a referral orthopaedic hospital in Northern Italy. Int Orthop. 2020;44(12):2499-2504. doi:10.1007/s00264-020-04772-x

3. Dexter F, Abouleish AE, Epstein RH, et al. Use of operating room information system data to predict the impact of reducing turnover times on staffing costs. Anesth Analg. 2003;97(4):1119-1126. doi:10.1213/01.ANE.0000082520.68800.79

4. Zheng NS, Warner JL, Osterman TJ, et al. A retrospective approach to evaluating potential adverse outcomes associated with delay of procedures for cardiovascular and cancer-related diagnoses in the context of COVID-19. J Biomed Inform. 2021;113:103657. doi:10.1016/j.jbi.2020.103657

5. Alcendor DJ. Targeting COVID-19 vaccine hesitancy in rural communities in Tennessee: implications for extending the COVID-19 pandemic in the South. Vaccines (Basel). 2021;9(11):1279. doi:10.3390/vaccines9111279

6. Perrone G, Giuffrida M, Bellini V, et al. Operating room setup: how to improve health care professionals safety during pandemic COVID-19: a quality improvement study. J Laparoendosc Adv Surg Tech A. 2021;31(1):85-89. doi:10.1089/lap.2020.0592

7. Iorio-Morin C, Hodaie M, Sarica C, et al. Letter: the risk of COVID-19 infection during neurosurgical procedures: a review of severe acute respiratory distress syndrome coronavirus 2 (SARS-CoV-2) modes of transmission and proposed neurosurgery-specific measures for mitigation. Neurosurgery. 2020;87(2):E178-E185. doi:10.1093/neuros/nyaa157

8. Gupta P, Muthukumar N, Rajshekhar V, et al. Neurosurgery and neurology practices during the novel COVID-19 pandemic: a consensus statement from India. Neurol India. 2020;68(2):246-254. doi:10.4103/0028-3886.283130

9. Mercer ST, Agarwal R, Dayananda KSS, et al. A comparative study looking at trauma and orthopaedic operating efficiency in the COVID-19 era. Perioper Care Oper Room Manag. 2020;21:100142. doi:10.1016/j.pcorm.2020.100142

10. Rozario N, Rozario D. Can machine learning optimize the efficiency of the operating room in the era of COVID-19? Can J Surg. 2020;63(6):E527-E529. doi:10.1503/cjs.016520

11. Toh KHQ, Barazanchi A, Rajaretnam NS, et al. COVID-19 response by New Zealand general surgical departments in tertiary metropolitan hospitals. ANZ J Surg. 2021;91(7-8):1352-1357. doi:10.1111/ans.17044

12. Moorthy RK, Rajshekhar V. Impact of COVID-19 pandemic on neurosurgical practice in India: a survey on personal protective equipment usage, testing, and perceptions on disease transmission. Neurol India. 2020;68(5):1133-1138. doi:10.4103/0028-3886.299173

13. Meneghini RM. Techniques and strategies to optimize efficiencies in the office and operating room: getting through the patient backlog and preserving hospital resources. J Arthroplasty. 2021;36(7S):S49-S51. doi:10.1016/j.arth.2021.03.010

14. Jean WC, Ironside NT, Sack KD, et al. The impact of COVID-19 on neurosurgeons and the strategy for triaging non-emergent operations: a global neurosurgery study. Acta Neurochir (Wien). 2020;162(6):1229-1240. doi:10.1007/s00701-020-04342-5

15. Raneri F, Rustemi O, Zambon G, et al. Neurosurgery in times of a pandemic: a survey of neurosurgical services during the COVID-19 outbreak in the Veneto region in Italy. Neurosurg Focus. 2020;49(6):E9. doi:10.3171/2020.9.FOCUS20691

Best Practice Implementation and Clinical Inertia

Article Type
Changed
Wed, 12/28/2022 - 12:35
Display Headline
Best Practice Implementation and Clinical Inertia

From the Department of Medicine, Brigham and Women’s Hospital, and Harvard Medical School, Boston, MA.

Clinical inertia is defined as the failure of clinicians to initiate or escalate guideline-directed medical therapy to achieve treatment goals for well-defined clinical conditions.1,2 Evidence-based guidelines recommend optimal disease management with readily available medical therapies throughout the phases of clinical care. Unfortunately, the care provided to individual patients undergoes multiple modifications throughout the disease course, resulting in divergent pathways, significant deviations from treatment guidelines, and failure of “safeguard” checkpoints to reinstate, initiate, optimize, or stop treatments. Clinical inertia generally describes rigidity or resistance to change in implementing evidence-based guidelines. The term describes treatment behavior on the part of an individual clinician, not organizational inertia, which encompasses both internal factors (the immediate clinical practice setting) and external factors (national and international guidelines and recommendations) that together produce resistance to optimizing disease treatment and therapeutic regimens. Individual clinicians’ clinical inertia, in the form of resistance to guideline implementation and evidence-based principles, can be one factor that drives organizational inertia. In turn, such individual behavior can be shaped by personal beliefs, knowledge, interpretation, skills, management principles, and biases. The terms therapeutic inertia and clinical inertia should not be confused with nonadherence on the patient’s part when the clinician follows best practice guidelines.3

Clinical inertia has been described in several clinical domains, including diabetes,4,5 hypertension,6,7 heart failure,8 depression,9 pulmonary medicine,10 and complex disease management.11 Clinicians may set suboptimal treatment goals because of specific beliefs and attitudes about optimal therapeutic targets. For example, when treating a patient with a chronic disease that is presently stable, a clinician could elect to initiate suboptimal treatment, as escalation of treatment might not seem a priority in stable disease; the clinician may also have concerns about overtreatment. Other factors that can contribute to clinical inertia (ie, undertreatment in the presence of indications for treatment) relate to the patient, the clinical setting, and the organization, as well as to the legitimate need to individualize therapy for specific patients. Organizational inertia is the initial global resistance by the system to implementation; it can slow the dissemination and adoption of best practices but eventually declines over time. Individual clinical inertia, on the other hand, will likely persist after the system-level rollout of guideline-based approaches.

The trajectory of dissemination, implementation, and adoption of innovations and best practices is illustrated in the Figure. When guidelines and medical societies endorse the adoption of an innovation or practice change after its benefits have been established by regulatory bodies, uptake can still be hindered by both organizational and clinical inertia. Overcoming inertia to system-level change requires addressing individual clinicians, along with practice and organizational factors, to ensure systematic adoption. From the clinician’s side, training and cognitive interventions that build adaptation and coping skills can improve understanding of treatment options through standardized educational and behavior-modification tools, direct and indirect performance feedback, and decision support, delivered through a continuous improvement approach at both the individual and system levels.

Figure. Trajectory of innovations, dissemination, and organizational adaptations

Addressing inertia in clinical practice requires a deep understanding of the individual and organizational elements that foster resistance to adopting best practice models. Research that explores tools and approaches to overcome inertia in managing complex diseases is a key step in advancing clinical innovation and disseminating best practices.

Corresponding author: Ebrahim Barkoudah, MD, MPH; [email protected]

Disclosures: None reported.

References

1. Phillips LS, Branch WT, Cook CB, et al. Clinical inertia. Ann Intern Med. 2001;135(9):825-834. doi:10.7326/0003-4819-135-9-200111060-00012

2. Allen JD, Curtiss FR, Fairman KA. Nonadherence, clinical inertia, or therapeutic inertia? J Manag Care Pharm. 2009;15(8):690-695. doi:10.18553/jmcp.2009.15.8.690

3. Zafar A, Davies M, Azhar A, Khunti K. Clinical inertia in management of T2DM. Prim Care Diabetes. 2010;4(4):203-207. doi:10.1016/j.pcd.2010.07.003

4. Khunti K, Davies MJ. Clinical inertia—time to reappraise the terminology? Prim Care Diabetes. 2017;11(2):105-106. doi:10.1016/j.pcd.2017.01.007

5. O’Connor PJ. Overcome clinical inertia to control systolic blood pressure. Arch Intern Med. 2003;163(22):2677-2678. doi:10.1001/archinte.163.22.2677

6. Faria C, Wenzel M, Lee KW, et al. A narrative review of clinical inertia: focus on hypertension. J Am Soc Hypertens. 2009;3(4):267-276. doi:10.1016/j.jash.2009.03.001

7. Jarjour M, Henri C, de Denus S, et al. Care gaps in adherence to heart failure guidelines: clinical inertia or physiological limitations? JACC Heart Fail. 2020;8(9):725-738. doi:10.1016/j.jchf.2020.04.019

8. Henke RM, Zaslavsky AM, McGuire TG, et al. Clinical inertia in depression treatment. Med Care. 2009;47(9):959-967. doi:10.1097/MLR.0b013e31819a5da0

9. Cooke CE, Sidel M, Belletti DA, Fuhlbrigge AL. Clinical inertia in the management of chronic obstructive pulmonary disease. COPD. 2012;9(1):73-80. doi:10.3109/15412555.2011.631957

10. Whitford DL, Al-Anjawi HA, Al-Baharna MM. Impact of clinical inertia on cardiovascular risk factors in patients with diabetes. Prim Care Diabetes. 2014;8(2):133-138. doi:10.1016/j.pcd.2013.10.007

The Role of Revascularization and Viability Testing in Patients With Multivessel Coronary Artery Disease and Severely Reduced Ejection Fraction

Article Type
Changed
Wed, 12/28/2022 - 12:33
Display Headline
The Role of Revascularization and Viability Testing in Patients With Multivessel Coronary Artery Disease and Severely Reduced Ejection Fraction

Study 1 Overview (STICHES Investigators)

Objective: To assess the survival benefit of coronary-artery bypass grafting (CABG) added to guideline-directed medical therapy, compared to optimal medical therapy (OMT) alone, in patients with coronary artery disease, heart failure, and severe left ventricular dysfunction.

Design: Multicenter, randomized, prospective study with extended follow-up (median duration of 9.8 years).

Setting and participants: A total of 1212 patients with left ventricular ejection fraction (LVEF) of 35% or less and coronary artery disease were randomized to medical therapy plus CABG or OMT alone at 127 clinical sites in 26 countries.

Main outcome measures: The primary endpoint was death from any cause. Main secondary endpoints were death from cardiovascular causes and a composite outcome of death from any cause or hospitalization for cardiovascular causes.

Main results: The primary outcome of death from any cause occurred in 359 patients (58.9%) in the CABG group and 398 patients (66.1%) in the medical therapy group (hazard ratio [HR], 0.84; 95% CI, 0.73-0.97; P = .02). Death from cardiovascular causes was reported in 247 patients (40.5%) in the CABG group and 297 patients (49.3%) in the medical therapy group (HR, 0.79; 95% CI, 0.66-0.93; P < .01). The composite outcome of death from any cause or hospitalization for cardiovascular causes occurred in 467 patients (76.6%) in the CABG group and 524 patients (87.0%) in the medical therapy group (HR, 0.72; 95% CI, 0.64-0.82; P < .01).
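
As a quick arithmetic cross-check of these figures (ours, not the investigators'), the reported event counts and percentages imply the size of each randomization arm, and the implied arm sizes should sum to the 1212 patients randomized. The short Python sketch below reproduces this, along with the absolute risk reduction and the implied number needed to treat; all variable names are illustrative.

```python
# Consistency check on the reported STICHES primary-outcome figures:
# 359 deaths (58.9%) with CABG vs 398 deaths (66.1%) with medical therapy.

cabg_deaths, cabg_rate = 359, 0.589
med_deaths, med_rate = 398, 0.661

# Implied arm sizes: count / rate, rounded to the nearest patient.
cabg_n = round(cabg_deaths / cabg_rate)   # ~610
med_n = round(med_deaths / med_rate)      # ~602

arr = med_rate - cabg_rate                # absolute risk reduction, 7.2 points
nnt = 1 / arr                             # ~14 patients per death averted

print(f"Implied arm sizes: CABG {cabg_n}, medical {med_n}, total {cabg_n + med_n}")
print(f"ARR = {arr:.1%}; NNT over ~10 years = {nnt:.0f}")
```

The implied total of 1212 matches the number randomized, a useful sanity check when abstracting trial results.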

Conclusion: Over a median follow-up of 9.8 years in patients with ischemic cardiomyopathy with severely reduced ejection fraction, the rates of death from any cause, death from cardiovascular causes, and the composite of death from any cause or hospitalization for cardiovascular causes were significantly lower in patients undergoing CABG than in patients receiving medical therapy alone.

Study 2 Overview (REVIVED BCIS Trial Group)

Objective: To assess whether percutaneous coronary intervention (PCI) can improve survival and left ventricular function in patients with severe left ventricular systolic dysfunction as compared to OMT alone.

Design: Multicenter, randomized, prospective study.

Setting and participants: A total of 700 patients with an LVEF of 35% or less, severe coronary artery disease amenable to PCI, and demonstrable myocardial viability were randomly assigned to either PCI plus optimal medical therapy (PCI group) or OMT alone (OMT group).

Main outcome measures: The primary outcome was death from any cause or hospitalization for heart failure. The main secondary outcomes were LVEF at 6 and 12 months and quality of life (QOL) scores.

Main results: Over a median follow-up of 41 months, the primary outcome was reported in 129 patients (37.2%) in the PCI group and in 134 patients (38.0%) in the OMT group (HR, 0.99; 95% CI, 0.78-1.27; P = .96). The LVEF was similar in the 2 groups at 6 months (mean difference, –1.6 percentage points; 95% CI, –3.7 to 0.5) and at 12 months (mean difference, 0.9 percentage points; 95% CI, –1.7 to 3.4). QOL scores at 6 and 12 months favored the PCI group, but the difference had diminished at 24 months.

Conclusion: In patients with severe ischemic cardiomyopathy, revascularization by PCI in addition to OMT did not result in a lower incidence of death from any cause or hospitalization for heart failure.

Commentary

Coronary artery disease is the most common cause of heart failure with reduced ejection fraction and an important cause of mortality.1 Patients with ischemic cardiomyopathy with reduced ejection fraction are often considered for revascularization in addition to OMT and device therapies. Although there have been multiple retrospective studies and registries suggesting that cardiac outcomes and LVEF improve with revascularization, the number of large-scale prospective studies that assessed this clinical question and randomized patients to revascularization plus OMT compared to OMT alone has been limited.

In the Surgical Treatment for Ischemic Heart Failure (STICH) study,2,3 eligible patients had coronary artery disease amenable to CABG and an LVEF of 35% or less. Patients (N = 1212) were randomly assigned to CABG plus OMT or OMT alone between July 2002 and May 2007. The original study, with a median follow-up of 5 years, did not show a survival benefit, but when follow-up of the same study population was extended to 9.8 years, the primary outcome of death from any cause was significantly lower in the CABG group than with OMT alone (58.9% vs 66.1%, P = .02). The findings from this study led to a class I guideline recommendation of CABG over medical therapy in patients with multivessel disease and low ejection fraction.4

Since the STICH trial was designed, there have been significant improvements in the devices and techniques used for PCI, and the procedure is now widely performed in patients with multivessel disease.5 The advantages of PCI over CABG include shorter recovery times and a lower risk of immediate complications. In this context, the recently reported Revascularization for Ischemic Ventricular Dysfunction (REVIVED) study assessed clinical outcomes in patients with severe coronary artery disease and reduced ejection fraction by randomizing patients to either PCI with OMT or OMT alone.6 At a median follow-up of 41 months, the investigators found no difference in the primary outcome of death from any cause or hospitalization for heart failure (37.2% vs 38.0%; HR, 0.99; 95% CI, 0.78-1.27; P = .96). Moreover, follow-up echocardiograms read by the core laboratory showed no between-group difference in the degree of LVEF improvement at 6 and 12 months. Finally, although results of the QOL assessment using the Kansas City Cardiomyopathy Questionnaire (KCCQ), a validated, patient-reported, heart-failure-specific QOL scale, favored the PCI group at 6 and 12 months of follow-up, the difference had diminished by 24 months.

The main strength of the REVIVED study was that it targeted a patient population with severe coronary artery disease, including left main disease, and severely reduced ejection fraction that historically has been excluded from large-scale randomized controlled studies evaluating PCI with OMT compared to OMT alone.7 However, there are several points to consider when interpreting the results of this study. First, further details of the PCI procedures are necessary. The REVIVED study recommended revascularization of all territories with viable myocardium; the anatomical revascularization index utilizing the British Cardiovascular Intervention Society (BCIS) Jeopardy Score was 71%. It is important to note that this jeopardy score was operator-reported, and the core-lab-adjudicated anatomical revascularization rate may be lower. Although viability testing, primarily with cardiac magnetic resonance imaging, was performed in most patients, the correlation between the revascularized territories and the viable segments has yet to be reported. Moreover, procedural details such as the use of intravascular ultrasound and physiological testing, both known to improve clinical outcomes, need to be reported.8,9

Second, although ischemic cardiomyopathy is highly prevalent, the patients included in this study were highly selected from daily clinical practice, as evidenced by the prolonged enrollment period (8 years). Enrollees were largely stable patients with less complex coronary anatomy, as evidenced by the median interval of 80 days from angiography to randomization. Given the degree of left ventricular dysfunction in the trial population, it is notable that only 14% of patients had left main disease and half had only 2-vessel disease. The severity of the left main disease also needs to be clarified, as patients whose left main lesions the operator judged to be critical were likely not enrolled. Furthermore, because the standard of care based on the STICH trial is to refer patients with severe multivessel coronary artery disease for CABG, patients with more severe and complex disease were likely not included in this trial. It is also important to note that this study enrolled patients with stable ischemic heart disease, and the data do not apply to patients presenting with acute coronary syndrome.

Third, although the primary outcome was similar between the groups, the secondary outcome of unplanned revascularization was lower in the PCI group. In addition, the rate of acute myocardial infarction (MI) was similar between the 2 groups, but the rate of spontaneous MI was lower in the PCI group than in the OMT group (5.2% vs 9.3%), as 40% of MI cases in the PCI group were periprocedural MIs. The correlation between periprocedural MI and long-term outcomes has been modest compared with that for spontaneous MI. Moreover, with longer follow-up, the number of spontaneous MI cases is expected to rise while the number of periprocedural MI cases is not. Extending the follow-up period is also important, as the STICH extension trial showed a statistically significant difference at 10-year follow-up despite negative results at the time of the original publication.
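
To make the MI arithmetic in the preceding paragraph concrete, the sketch below gives a rough decomposition under two assumptions we state explicitly (they are ours, not the trial's): that the 40% periprocedural share applies to all MI events in the PCI arm, and that essentially all OMT-group MIs were spontaneous.

```python
# Rough decomposition of the REVIVED MI rates quoted above, assuming the
# 40% periprocedural share applies to all MI events in the PCI arm.

pci_spontaneous = 0.052          # spontaneous MI rate, PCI group (from the text)
periproc_share = 0.40            # assumed fraction of PCI-group MIs that were periprocedural

# If spontaneous MIs make up the remaining 60%, the overall PCI MI rate is:
pci_total = pci_spontaneous / (1 - periproc_share)   # ~8.7%
omt_total = 0.093                # assuming OMT-group MIs were essentially all spontaneous

print(f"Estimated total MI: PCI {pci_total:.1%} vs OMT {omt_total:.1%}")
# Similar totals, but different composition: the OMT excess is spontaneous MI,
# which tracks long-term outcomes more closely than periprocedural MI.
```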

Fourth, the REVIVED trial randomized substantially fewer patients than the STICH trial, and the authors reported fewer primary-outcome events than the number estimated to be needed to power the primary hypothesis. In addition, significant improvements in medical treatment for heart failure with reduced ejection fraction since the STICH trial make a direct comparison of PCI vs CABG in this patient population infeasible.

Finally, although severe angina was not an exclusion criterion, two-thirds of the patients enrolled had no angina, and only 2% of the patients had baseline severe angina. This is important to consider when interpreting the results of the patient-reported health status as previous studies have shown that patients with worse angina at baseline derive the largest improvement in their QOL,10,11 and symptom improvement is the main indication for PCI in patients with stable ischemic heart disease.

Applications for Clinical Practice and System Implementation

In patients with severe left ventricular systolic dysfunction and multivessel stable ischemic heart disease who are well compensated and have little or no angina at baseline, OMT alone may be considered as the initial strategy, rather than the addition of PCI, after a careful discussion of risks and benefits. Further details about revascularization and extended follow-up data from the REVIVED trial are necessary.

Practice Points

  • Patients with ischemic cardiomyopathy and reduced ejection fraction have been understudied in randomized trials.
  • Further studies are necessary to understand the benefits of revascularization and the role of viability testing in this population.

Taishi Hirai, MD, and Ziad Sayed Ahmad, MD
University of Missouri, Columbia, MO

References

1. Nowbar AN, Gitto M, Howard JP, et al. Mortality from ischemic heart disease. Circ Cardiovasc Qual Outcomes. 2019;12(6):e005375. doi:10.1161/CIRCOUTCOMES

2. Velazquez EJ, Lee KL, Deja MA, et al; for the STICH Investigators. Coronary-artery bypass surgery in patients with left ventricular dysfunction. N Engl J Med. 2011;364(17):1607-1616. doi:10.1056/NEJMoa1100356

3. Velazquez EJ, Lee KL, Jones RH, et al. Coronary-artery bypass surgery in patients with ischemic cardiomyopathy. N Engl J Med. 2016;374(16):1511-1520. doi:10.1056/NEJMoa1602001

4. Lawton JS, Tamis-Holland JE, Bangalore S, et al. 2021 ACC/AHA/SCAI guideline for coronary artery revascularization: a report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. J Am Coll Cardiol. 2022;79(2):e21-e129. doi:10.1016/j.jacc.2021.09.006

5. Kirtane AJ, Doshi D, Leon MB, et al. Treatment of higher-risk patients with an indication for revascularization: evolution within the field of contemporary percutaneous coronary intervention. Circulation. 2016;134(5):422-431. doi:10.1161/CIRCULATIONAHA

6. Perera D, Clayton T, O’Kane PD, et al. Percutaneous revascularization for ischemic left ventricular dysfunction. N Engl J Med. 2022;387(15):1351-1360. doi:10.1056/NEJMoa2206606

7. Maron DJ, Hochman JS, Reynolds HR, et al. Initial invasive or conservative strategy for stable coronary disease. Circulation. 2020;142(18):1725-1735. doi:10.1161/CIRCULATIONAHA

8. De Bruyne B, Pijls NH, Kalesan B, et al. Fractional flow reserve-guided PCI versus medical therapy in stable coronary disease. N Engl J Med. 2012;367(11):991-1001. doi:10.1056/NEJMoa1205361

9. Zhang J, Gao X, Kan J, et al. Intravascular ultrasound versus angiography-guided drug-eluting stent implantation: the ULTIMATE trial. J Am Coll Cardiol. 2018;72(24):3126-3137. doi:10.1016/j.jacc.2018.09.013

10. Spertus JA, Jones PG, Maron DJ, et al. Health-status outcomes with invasive or conservative care in coronary disease. N Engl J Med. 2020;382(15):1408-1419. doi:10.1056/NEJMoa1916370

11. Hirai T, Grantham JA, Sapontis J, et al. Quality of life changes after chronic total occlusion angioplasty in patients with baseline refractory angina. Circ Cardiovasc Interv. 2019;12:e007558. doi:10.1161/CIRCINTERVENTIONS.118.007558

Article PDF
Issue
Journal of Clinical Outcomes Management - 29(6)
Publications
Topics
Page Number
202-205
Sections
Article PDF
Article PDF

Study 1 Overview (STICHES Investigators)

Objective: To assess the survival benefit of coronary-artery bypass grafting (CABG) added to guideline-directed medical therapy, compared to optimal medical therapy (OMT) alone, in patients with coronary artery disease, heart failure, and severe left ventricular dysfunction. Design: Multicenter, randomized, prospective study with extended follow-up (median duration of 9.8 years).

Setting and participants: A total of 1212 patients with left ventricular ejection fraction (LVEF) of 35% or less and coronary artery disease were randomized to medical therapy plus CABG or OMT alone at 127 clinical sites in 26 countries.

Main outcome measures: The primary endpoint was death from any cause. Main secondary endpoints were death from cardiovascular causes and a composite outcome of death from any cause or hospitalization for cardiovascular causes.

Main results: There were 359 primary outcome all-cause deaths (58.9%) in the CABG group and 398 (66.1%) in the medical therapy group (hazard ratio [HR], 0.84; 95% CI, 0.73-0.97; P = .02). Death from cardiovascular causes was reported in 247 patients (40.5%) in the CABG group and 297 patients (49.3%) in the medical therapy group (HR, 0.79; 95% CI, 0.66-0.93; P < .01). The composite outcome of death from any cause or hospitalization for cardiovascular causes occurred in 467 patients (76.6%) in the CABG group and 467 patients (87.0%) in the medical therapy group (HR, 0.72; 95% CI, 0.64-0.82; P < .01).

Conclusion: Over a median follow-up of 9.8 years in patients with ischemic cardiomyopathy with severely reduced ejection fraction, the rates of death from any cause, death from cardiovascular causes, and the composite of death from any cause or hospitalization for cardiovascular causes were significantly lower in patients undergoing CABG than in patients receiving medical therapy alone.

Study 2 Overview (REVIVED BCIS Trial Group)

Objective: To assess whether percutaneous coronary intervention (PCI) can improve survival and left ventricular function in patients with severe left ventricular systolic dysfunction as compared to OMT alone.

Design: Multicenter, randomized, prospective study.

Setting and participants: A total of 700 patients with LVEF <35% with severe coronary artery disease amendable to PCI and demonstrable myocardial viability were randomly assigned to either PCI plus optimal medical therapy (PCI group) or OMT alone (OMT group).

Main outcome measures: The primary outcome was death from any cause or hospitalization for heart failure. The main secondary outcomes were LVEF at 6 and 12 months and quality of life (QOL) scores.

Main results: Over a median follow-up of 41 months, the primary outcome was reported in 129 patients (37.2%) in the PCI group and in 134 patients (38.0%) in the OMT group (HR, 0.99; 95% CI, 0.78-1.27; P = .96). The LVEF was similar in the 2 groups at 6 months (mean difference, –1.6 percentage points; 95% CI, –3.7 to 0.5) and at 12 months (mean difference, 0.9 percentage points; 95% CI, –1.7 to 3.4). QOL scores at 6 and 12 months favored the PCI group, but the difference had diminished at 24 months.

Conclusion: In patients with severe ischemic cardiomyopathy, revascularization by PCI in addition to OMT did not result in a lower incidence of death from any cause or hospitalization from heart failure.

 

 

Commentary

Coronary artery disease is the most common cause of heart failure with reduced ejection fraction and an important cause of mortality.1 Patients with ischemic cardiomyopathy with reduced ejection fraction are often considered for revascularization in addition to OMT and device therapies. Although there have been multiple retrospective studies and registries suggesting that cardiac outcomes and LVEF improve with revascularization, the number of large-scale prospective studies that assessed this clinical question and randomized patients to revascularization plus OMT compared to OMT alone has been limited.

In the Surgical Treatment for Ischemic Heart Failure (STICH) study,2,3 eligible patients had coronary artery disease amendable to CABG and a LVEF of 35% or less. Patients (N = 1212) were randomly assigned to CABG plus OMT or OMT alone between July 2002 and May 2007. The original study, with a median follow-up of 5 years, did not show survival benefit, but the investigators reported that the primary outcome of death from any cause was significantly lower in the CABG group compared to OMT alone when follow-up of the same study population was extended to 9.8 years (58.9% vs 66.1%, P = .02). The findings from this study led to a class I guideline recommendation of CABG over medical therapy in patients with multivessel disease and low ejection fraction.4

Since the STICH trial was designed, there have been significant improvements in devices and techniques used for PCI, and the procedure is now widely performed in patients with multivessel disease.5 The advantages of PCI over CABG include shorter recovery times and lower risk of immediate complications. In this context, the recently reported Revascularization for Ischemic Ventricular Dysfunction (REVIVED) study assessed clinical outcomes in patients with severe coronary artery disease and reduced ejection fraction by randomizing patients to either PCI with OMT or OMT alone.6 At a median follow-up of 3.5 years, the investigators found no difference in the primary outcome of death from any cause or hospitalization for heart failure (37.2% vs 38.0%; 95% CI, 0.78-1.28; P = .96). Moreover, the degree of LVEF improvement, assessed by follow-up echocardiogram read by the core lab, showed no difference in the degree of LVEF improvement between groups at 6 and 12 months. Finally, although results of the QOL assessment using the Kansas City Cardiomyopathy Questionnaire (KCCQ), a validated, patient-reported, heart-failure-specific QOL scale, favored the PCI group at 6 and 12 months of follow-up, the difference had diminished at 24 months.

The main strength of the REVIVED study was that it targeted a patient population with severe coronary artery disease, including left main disease and severely reduced ejection fraction, that historically have been excluded from large-scale randomized controlled studies evaluating PCI with OMT compared to OMT alone.7 However, there are several points to consider when interpreting the results of this study. First, further details of the PCI procedures are necessary. The REVIVED study recommended revascularization of all territories with viable myocardium; the anatomical revascularization index utilizing the British Cardiovascular Intervention Society (BCIS) Jeopardy Score was 71%. It is important to note that this jeopardy score was operator-reported and the core-lab adjudicated anatomical revascularization rate may be lower. Although viability testing primarily utilizing cardiac magnetic resonance imaging was performed in most patients, correlation between the revascularization territory and the viable segments has yet to be reported. Moreover, procedural details such as use of intravascular ultrasound and physiological testing, known to improve clinical outcome, need to be reported.8,9

Second, there is a high prevalence of ischemic cardiomyopathy, and it is important to note that the patients included in this study were highly selected from daily clinical practice, as evidenced by the prolonged enrollment period (8 years). Individuals were largely stable patients with less complex coronary anatomy as evidenced by the median interval from angiography to randomization of 80 days. Taking into consideration the degree of left ventricular dysfunction for patients included in the trial, only 14% of the patients had left main disease and half of the patients only had 2-vessel disease. The severity of the left main disease also needs to be clarified as it is likely that patients the operator determined to be critical were not enrolled in the study. Furthermore, the standard of care based on the STICH trial is to refer patients with severe multivessel coronary artery disease to CABG, making it more likely that patients with more severe and complex disease were not included in this trial. It is also important to note that this study enrolled patients with stable ischemic heart disease, and the data do not apply to patients presenting with acute coronary syndrome.

 

 

Third, although the primary outcome was similar between the groups, the secondary outcome of unplanned revascularization was lower in the PCI group. In addition, the rate of acute myocardial infarction (MI) was similar between the 2 groups, but the rate of spontaneous MI was lower in the PCI group compared to the OMT group (5.2% vs 9.3%) as 40% of MI cases in the PCI group were periprocedural MIs. The correlation between periprocedural MI and long-term outcomes has been modest compared to spontaneous MI. Moreover, with the longer follow-up, the number of spontaneous MI cases is expected to rise while the number of periprocedural MI cases is not. Extending the follow-up period is also important, as the STICH extension trial showed a statistically significant difference at 10-year follow up despite negative results at the time of the original publication.

Fourth, the REVIVED trial randomized a significantly lower number of patients compared to the STICH trial, and the authors reported fewer primary-outcome events than the estimated number needed to achieve the power to assess the primary hypothesis. In addition, significant improvements in medical treatment for heart failure with reduced ejection fraction since the STICH trial make comparison of PCI vs CABG in this patient population unfeasible.

Finally, although severe angina was not an exclusion criterion, two-thirds of the patients enrolled had no angina, and only 2% of the patients had baseline severe angina. This is important to consider when interpreting the results of the patient-reported health status as previous studies have shown that patients with worse angina at baseline derive the largest improvement in their QOL,10,11 and symptom improvement is the main indication for PCI in patients with stable ischemic heart disease.

Applications for Clinical Practice and System Implementation

In patients with severe left ventricular systolic dysfunction and multivessel stable ischemic heart disease who are well compensated and have little or no angina at baseline, OMT alone as an initial strategy may be considered against the addition of PCI after careful risk and benefit discussion. Further details about revascularization and extended follow-up data from the REVIVED trial are necessary.

Practice Points

  • Patients with ischemic cardiomyopathy with reduced ejection fraction have been an understudied population in previous studies.
  • Further studies are necessary to understand the benefits of revascularization and the role of viability testing in this population.

Taishi Hirai MD, and Ziad Sayed Ahmad, MD
University of Missouri, Columbia, MO

Study 1 Overview (STICHES Investigators)

Objective: To assess the survival benefit of coronary-artery bypass grafting (CABG) added to guideline-directed medical therapy, compared to optimal medical therapy (OMT) alone, in patients with coronary artery disease, heart failure, and severe left ventricular dysfunction. Design: Multicenter, randomized, prospective study with extended follow-up (median duration of 9.8 years).

Setting and participants: A total of 1212 patients with left ventricular ejection fraction (LVEF) of 35% or less and coronary artery disease were randomized to medical therapy plus CABG or OMT alone at 127 clinical sites in 26 countries.

Main outcome measures: The primary endpoint was death from any cause. Main secondary endpoints were death from cardiovascular causes and a composite outcome of death from any cause or hospitalization for cardiovascular causes.

Main results: There were 359 primary outcome all-cause deaths (58.9%) in the CABG group and 398 (66.1%) in the medical therapy group (hazard ratio [HR], 0.84; 95% CI, 0.73-0.97; P = .02). Death from cardiovascular causes was reported in 247 patients (40.5%) in the CABG group and 297 patients (49.3%) in the medical therapy group (HR, 0.79; 95% CI, 0.66-0.93; P < .01). The composite outcome of death from any cause or hospitalization for cardiovascular causes occurred in 467 patients (76.6%) in the CABG group and 467 patients (87.0%) in the medical therapy group (HR, 0.72; 95% CI, 0.64-0.82; P < .01).

Conclusion: Over a median follow-up of 9.8 years in patients with ischemic cardiomyopathy with severely reduced ejection fraction, the rates of death from any cause, death from cardiovascular causes, and the composite of death from any cause or hospitalization for cardiovascular causes were significantly lower in patients undergoing CABG than in patients receiving medical therapy alone.

Study 2 Overview (REVIVED BCIS Trial Group)

Objective: To assess whether percutaneous coronary intervention (PCI) can improve survival and left ventricular function in patients with severe left ventricular systolic dysfunction as compared to OMT alone.

Design: Multicenter, randomized, prospective study.

Setting and participants: A total of 700 patients with LVEF <35% with severe coronary artery disease amendable to PCI and demonstrable myocardial viability were randomly assigned to either PCI plus optimal medical therapy (PCI group) or OMT alone (OMT group).

Main outcome measures: The primary outcome was death from any cause or hospitalization for heart failure. The main secondary outcomes were LVEF at 6 and 12 months and quality of life (QOL) scores.

Main results: Over a median follow-up of 41 months, the primary outcome was reported in 129 patients (37.2%) in the PCI group and in 134 patients (38.0%) in the OMT group (HR, 0.99; 95% CI, 0.78-1.27; P = .96). The LVEF was similar in the 2 groups at 6 months (mean difference, –1.6 percentage points; 95% CI, –3.7 to 0.5) and at 12 months (mean difference, 0.9 percentage points; 95% CI, –1.7 to 3.4). QOL scores at 6 and 12 months favored the PCI group, but the difference had diminished at 24 months.

Conclusion: In patients with severe ischemic cardiomyopathy, revascularization by PCI in addition to OMT did not result in a lower incidence of death from any cause or hospitalization from heart failure.

 

 

Commentary

Coronary artery disease is the most common cause of heart failure with reduced ejection fraction and an important cause of mortality.1 Patients with ischemic cardiomyopathy with reduced ejection fraction are often considered for revascularization in addition to OMT and device therapies. Although there have been multiple retrospective studies and registries suggesting that cardiac outcomes and LVEF improve with revascularization, the number of large-scale prospective studies that assessed this clinical question and randomized patients to revascularization plus OMT compared to OMT alone has been limited.

In the Surgical Treatment for Ischemic Heart Failure (STICH) study,2,3 eligible patients had coronary artery disease amendable to CABG and a LVEF of 35% or less. Patients (N = 1212) were randomly assigned to CABG plus OMT or OMT alone between July 2002 and May 2007. The original study, with a median follow-up of 5 years, did not show survival benefit, but the investigators reported that the primary outcome of death from any cause was significantly lower in the CABG group compared to OMT alone when follow-up of the same study population was extended to 9.8 years (58.9% vs 66.1%, P = .02). The findings from this study led to a class I guideline recommendation of CABG over medical therapy in patients with multivessel disease and low ejection fraction.4

Since the STICH trial was designed, there have been significant improvements in devices and techniques used for PCI, and the procedure is now widely performed in patients with multivessel disease.5 The advantages of PCI over CABG include shorter recovery times and lower risk of immediate complications. In this context, the recently reported Revascularization for Ischemic Ventricular Dysfunction (REVIVED) study assessed clinical outcomes in patients with severe coronary artery disease and reduced ejection fraction by randomizing patients to either PCI with OMT or OMT alone.6 At a median follow-up of 3.5 years, the investigators found no difference in the primary outcome of death from any cause or hospitalization for heart failure (37.2% vs 38.0%; 95% CI, 0.78-1.28; P = .96). Moreover, the degree of LVEF improvement, assessed by follow-up echocardiogram read by the core lab, showed no difference in the degree of LVEF improvement between groups at 6 and 12 months. Finally, although results of the QOL assessment using the Kansas City Cardiomyopathy Questionnaire (KCCQ), a validated, patient-reported, heart-failure-specific QOL scale, favored the PCI group at 6 and 12 months of follow-up, the difference had diminished at 24 months.

The main strength of the REVIVED study was that it targeted a patient population with severe coronary artery disease, including left main disease and severely reduced ejection fraction, that historically have been excluded from large-scale randomized controlled studies evaluating PCI with OMT compared to OMT alone.7 However, there are several points to consider when interpreting the results of this study. First, further details of the PCI procedures are necessary. The REVIVED study recommended revascularization of all territories with viable myocardium; the anatomical revascularization index utilizing the British Cardiovascular Intervention Society (BCIS) Jeopardy Score was 71%. It is important to note that this jeopardy score was operator-reported and the core-lab adjudicated anatomical revascularization rate may be lower. Although viability testing primarily utilizing cardiac magnetic resonance imaging was performed in most patients, correlation between the revascularization territory and the viable segments has yet to be reported. Moreover, procedural details such as use of intravascular ultrasound and physiological testing, known to improve clinical outcome, need to be reported.8,9

Second, there is a high prevalence of ischemic cardiomyopathy, and it is important to note that the patients included in this study were highly selected from daily clinical practice, as evidenced by the prolonged enrollment period (8 years). Individuals were largely stable patients with less complex coronary anatomy as evidenced by the median interval from angiography to randomization of 80 days. Taking into consideration the degree of left ventricular dysfunction for patients included in the trial, only 14% of the patients had left main disease and half of the patients only had 2-vessel disease. The severity of the left main disease also needs to be clarified as it is likely that patients the operator determined to be critical were not enrolled in the study. Furthermore, the standard of care based on the STICH trial is to refer patients with severe multivessel coronary artery disease to CABG, making it more likely that patients with more severe and complex disease were not included in this trial. It is also important to note that this study enrolled patients with stable ischemic heart disease, and the data do not apply to patients presenting with acute coronary syndrome.

 

 

Third, although the primary outcome was similar between the groups, the secondary outcome of unplanned revascularization was lower in the PCI group. In addition, the rate of acute myocardial infarction (MI) was similar between the 2 groups, but the rate of spontaneous MI was lower in the PCI group compared to the OMT group (5.2% vs 9.3%) as 40% of MI cases in the PCI group were periprocedural MIs. The correlation between periprocedural MI and long-term outcomes has been modest compared to spontaneous MI. Moreover, with the longer follow-up, the number of spontaneous MI cases is expected to rise while the number of periprocedural MI cases is not. Extending the follow-up period is also important, as the STICH extension trial showed a statistically significant difference at 10-year follow up despite negative results at the time of the original publication.

Fourth, the REVIVED trial randomized far fewer patients than the STICH trial, and the authors reported fewer primary-outcome events than the number estimated to be needed to adequately power the test of the primary hypothesis. In addition, significant improvements in medical treatment for heart failure with reduced ejection fraction since the STICH trial make a comparison of PCI vs CABG in this patient population infeasible.
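
For context, the statistical power of a time-to-event comparison such as REVIVED's primary endpoint is driven by the number of primary-outcome events rather than by enrollment alone. The following is a minimal sketch of the standard Schoenfeld approximation for the required event count; the hazard ratio, alpha, and power shown are illustrative assumptions, not the REVIVED design parameters.

```python
from math import log
from scipy.stats import norm

def schoenfeld_events(hr: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Approximate events required for a 1:1 log-rank comparison (Schoenfeld)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 4 * (z_alpha + z_beta) ** 2 / log(hr) ** 2

# Illustrative only: detecting a hazard ratio of 0.75 with 80% power
print(round(schoenfeld_events(0.75)))  # ~379 events
```

A trial that accrues fewer events than its target is underpowered regardless of how many patients were randomized, which is the concern raised above.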

Finally, although severe angina was not an exclusion criterion, two-thirds of the patients enrolled had no angina, and only 2% had severe angina at baseline. This is important to consider when interpreting the patient-reported health-status results, as previous studies have shown that patients with worse baseline angina derive the largest improvement in their QOL,10,11 and symptom improvement is the main indication for PCI in patients with stable ischemic heart disease.

Applications for Clinical Practice and System Implementation

In patients with severe left ventricular systolic dysfunction and multivessel stable ischemic heart disease who are well compensated and have little or no angina at baseline, OMT alone may be considered as an initial strategy, with the addition of PCI weighed against it after a careful discussion of risks and benefits. Further details about revascularization and extended follow-up data from the REVIVED trial are needed.

Practice Points

  • Patients with ischemic cardiomyopathy and reduced ejection fraction have been understudied in previous trials.
  • Further studies are necessary to understand the benefits of revascularization and the role of viability testing in this population.

–Taishi Hirai, MD, and Ziad Sayed Ahmad, MD
University of Missouri, Columbia, MO

References

1. Nowbar AN, Gitto M, Howard JP, et al. Mortality from ischemic heart disease. Circ Cardiovasc Qual Outcomes. 2019;12(6):e005375. doi:10.1161/CIRCOUTCOMES

2. Velazquez EJ, Lee KL, Deja MA, et al; for the STICH Investigators. Coronary-artery bypass surgery in patients with left ventricular dysfunction. N Engl J Med. 2011;364(17):1607-1616. doi:10.1056/NEJMoa1100356

3. Velazquez EJ, Lee KL, Jones RH, et al. Coronary-artery bypass surgery in patients with ischemic cardiomyopathy. N Engl J Med. 2016;374(16):1511-1520. doi:10.1056/NEJMoa1602001

4. Lawton JS, Tamis-Holland JE, Bangalore S, et al. 2021 ACC/AHA/SCAI guideline for coronary artery revascularization: a report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. J Am Coll Cardiol. 2022;79(2):e21-e129. doi:10.1016/j.jacc.2021.09.006

5. Kirtane AJ, Doshi D, Leon MB, et al. Treatment of higher-risk patients with an indication for revascularization: evolution within the field of contemporary percutaneous coronary intervention. Circulation. 2016;134(5):422-431. doi:10.1161/CIRCULATIONAHA

6. Perera D, Clayton T, O’Kane PD, et al. Percutaneous revascularization for ischemic left ventricular dysfunction. N Engl J Med. 2022;387(15):1351-1360. doi:10.1056/NEJMoa2206606

7. Maron DJ, Hochman JS, Reynolds HR, et al. Initial invasive or conservative strategy for stable coronary disease. Circulation. 2020;142(18):1725-1735. doi:10.1161/CIRCULATIONAHA

8. De Bruyne B, Pijls NH, Kalesan B, et al. Fractional flow reserve-guided PCI versus medical therapy in stable coronary disease. N Engl J Med. 2012;367(11):991-1001. doi:10.1056/NEJMoa1205361

9. Zhang J, Gao X, Kan J, et al. Intravascular ultrasound versus angiography-guided drug-eluting stent implantation: the ULTIMATE trial. J Am Coll Cardiol. 2018;72(24):3126-3137. doi:10.1016/j.jacc.2018.09.013

10. Spertus JA, Jones PG, Maron DJ, et al. Health-status outcomes with invasive or conservative care in coronary disease. N Engl J Med. 2020;382(15):1408-1419. doi:10.1056/NEJMoa1916370

11. Hirai T, Grantham JA, Sapontis J, et al. Quality of life changes after chronic total occlusion angioplasty in patients with baseline refractory angina. Circ Cardiovasc Interv. 2019;12:e007558. doi:10.1161/CIRCINTERVENTIONS.118.007558


Anesthetic Choices and Postoperative Delirium Incidence: Propofol vs Sevoflurane

Study 1 Overview (Chang et al)

Objective: To assess the incidence of postoperative delirium (POD) following propofol- vs sevoflurane-based anesthesia in geriatric spine surgery patients.

Design: Retrospective, single-blinded observational study of propofol- and sevoflurane-based anesthesia cohorts.

Setting and participants: Patients eligible for this study were aged 65 years or older and admitted to SMG-SNU Boramae Medical Center (Seoul, South Korea). All patients underwent general anesthesia with either intravenous propofol or inhalational sevoflurane for spine surgery between January 2015 and December 2019 and were retrospectively identified via electronic medical records. Exclusion criteria included preoperative delirium, history of dementia, psychiatric disease, alcoholism, hepatic or renal dysfunction, postoperative mechanical ventilation dependence, other surgery within the previous 6 months, maintenance of intraoperative anesthesia with combined anesthetics, or an incomplete medical record.

Main outcome measures: The primary outcome was the incidence of POD after administration of propofol- and sevoflurane-based anesthesia during hospitalization. Patients were screened for POD regularly by attending nurses using the Nursing Delirium Screening Scale (disorientation, inappropriate behavior, inappropriate communication, hallucination, and psychomotor retardation) during the entirety of the patient’s hospital stay; if 1 or more screening criteria were met, a psychiatrist was consulted for the proper diagnosis and management of delirium. A psychiatric diagnosis was required for a case to be counted toward the incidence of POD in this study. Secondary outcomes included postoperative 30-day complications (angina, myocardial infarction, transient ischemic attack/stroke, pneumonia, deep vein thrombosis, pulmonary embolism, acute kidney injury, or infection) and length of postoperative hospital stay.
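
The screening-to-diagnosis cascade described above reduces to a simple decision rule. The sketch below encodes the workflow as reported (any positive screening feature triggers a psychiatry consult, and only psychiatrist-confirmed cases count toward POD incidence); note that the actual Nursing Delirium Screening Scale grades each item from 0 to 2 rather than as present/absent, so this is a simplified illustration of the described workflow, not the scale itself.

```python
NU_DESC_FEATURES = frozenset({
    "disorientation",
    "inappropriate behavior",
    "inappropriate communication",
    "hallucination",
    "psychomotor retardation",
})

def needs_psychiatry_consult(positive_features: set) -> bool:
    """Study workflow: 1 or more positive screening features triggers a consult."""
    return len(positive_features & NU_DESC_FEATURES) >= 1

def counts_toward_pod(positive_features: set, psychiatrist_confirmed: bool) -> bool:
    """Only psychiatrist-confirmed cases were counted toward POD incidence."""
    return needs_psychiatry_consult(positive_features) and psychiatrist_confirmed

# Example: a hallucinating patient confirmed by psychiatry counts as POD
assert counts_toward_pod({"hallucination"}, psychiatrist_confirmed=True)
```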

Main results: POD occurred in 29 of 281 patients (10.3%) in the total cohort. POD was more common in the sevoflurane group than in the propofol group (15.7% vs 5.0%; P = .003). In multivariable logistic regression, inhalational sevoflurane was associated with an increased risk of POD compared with propofol-based anesthesia (odds ratio [OR], 4.120; 95% CI, 1.549-10.954; P = .005). There was no association between choice of anesthetic and postoperative 30-day complications or length of postoperative hospital stay. Both older age (OR, 1.242; 95% CI, 1.130-1.366; P < .001) and higher pain score on postoperative day 1 (OR, 1.338; 95% CI, 1.056-1.696; P = .016) were associated with increased risk of POD.

Conclusion: Propofol-based anesthesia was associated with a lower incidence of and risk for POD than sevoflurane-based anesthesia in older patients undergoing spine surgery.

Study 2 Overview (Mei et al)

Objective: To determine the incidence and duration of POD in older patients after total knee/hip replacement (TKR/THR) under intravenous propofol or inhalational sevoflurane general anesthesia.

Design: Randomized clinical trial of propofol and sevoflurane groups.

Setting and participants: This study was conducted at the Shanghai Tenth People’s Hospital and involved 209 participants enrolled between June 2016 and November 2019. All participants were 60 years of age or older, scheduled for TKR/THR surgery under general anesthesia, American Society of Anesthesiologists (ASA) class I to III, and assessed to be of normal cognitive function preoperatively via a Mini-Mental State Examination. Participant exclusion criteria included preexisting delirium as assessed by the Confusion Assessment Method (CAM), prior diagnosed neurological diseases (eg, Parkinson’s disease), prior diagnosed mental disorders (eg, schizophrenia), or impaired vision or hearing that would influence cognitive assessments. All participants were randomly assigned to either sevoflurane or propofol anesthesia for their surgery via a computer-generated list. Of these, 103 received inhalational sevoflurane and 106 received intravenous propofol. All participants received standardized postoperative care.

Main outcome measures: All participants were interviewed by investigators, who were blinded to the anesthesia regimen, twice daily on postoperative days 1, 2, and 3 using the CAM and a CAM-based scoring system (CAM-S) to assess delirium severity. The CAM encapsulates 4 features: acute onset and fluctuating course, inattention, disorganized thinking, and altered level of consciousness. To diagnose delirium, both the first and second features must be present, in addition to either the third or the fourth. The average CAM-S scores across the 3 postoperative days indicated delirium severity, while the incidence and duration of delirium were assessed by the presence of CAM-defined delirium on any postoperative day.
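
The CAM decision rule described above is effectively a small boolean algorithm; a minimal sketch of that rule in code:

```python
def cam_delirium(acute_onset_fluctuating: bool, inattention: bool,
                 disorganized_thinking: bool, altered_consciousness: bool) -> bool:
    """CAM: features 1 and 2 are required, plus either feature 3 or feature 4."""
    return (acute_onset_fluctuating and inattention
            and (disorganized_thinking or altered_consciousness))

# Example: fluctuating confusion with inattention and disorganized thinking
assert cam_delirium(True, True, True, False)
# Inattention alone, without an acute fluctuating course, does not qualify
assert not cam_delirium(False, True, True, True)
```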

Main results: All eligible participants (N = 209; mean [SD] age, 71.2 [6.7] years; 29.2% male) were included in the final analysis. The incidence of POD was not statistically different between the propofol and sevoflurane groups (33.0% vs 23.3%; P = .119, chi-square test). It was estimated that 316 participants in each arm would have been needed to detect a statistical difference. The number of days of POD per person was higher with propofol anesthesia than with sevoflurane (0.5 [0.8] vs 0.3 [0.5] days; P = .049, Student's t-test).
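
Both reported statistics can be approximately reproduced from the published figures. Back-calculating the event counts from the reported rates (33.0% of 106 ≈ 35; 23.3% of 103 ≈ 24) and applying an uncorrected chi-square test recovers the reported P value, and a conventional two-proportion sample-size calculation lands near the authors' estimate of 316 per arm; the alpha and power inputs below are assumptions, as the trial's design parameters are not restated here.

```python
from scipy.stats import chi2_contingency
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# POD counts back-calculated from reported rates: 35/106 propofol, 24/103 sevoflurane
table = [[35, 106 - 35], [24, 103 - 24]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square P = {p:.3f}")  # ~0.119, matching the report

# Per-arm sample size to detect 33.0% vs 23.3% (assumed alpha = .05, power = .80)
h = proportion_effectsize(0.330, 0.233)  # Cohen's h
n = NormalIndPower().solve_power(effect_size=h, alpha=0.05, power=0.80, ratio=1)
print(f"~{n:.0f} participants per arm")  # ~334, close to the reported 316
```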

Conclusion: This underpowered study showed a 9.7-percentage-point difference in the incidence of POD between older adults who received propofol (33.0%) and those who received sevoflurane (23.3%) after THR/TKR. Further studies with a larger sample size are needed to compare general anesthetics and their role in POD.

Commentary

Delirium is characterized by an acute state of confusion with fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is often caused by medications and/or their related adverse effects, infections, electrolyte imbalances, and other clinical etiologies. Delirium often manifests in post-surgical settings, disproportionately affecting older patients and leading to increased risk of morbidity, mortality, hospital length of stay, and health care costs.1 Intraoperative risk factors for POD are determined by the degree of operative stress (eg, lower-risk surgeries put the patient at reduced risk for POD as compared to higher-risk surgeries) and are additive to preexisting patient-specific risk factors, such as older age and functional impairment.1 Because operative stress is associated with risk for POD, limiting operative stress in controlled ways, such as through the choice of anesthetic agent administered, may be a pragmatic way to manage operative risks and optimize outcomes, especially when serving a surgically vulnerable population.

In Study 1, Chang et al sought to assess whether 2 commonly utilized general anesthetics, propofol and sevoflurane, in older patients undergoing spine surgery differentially affected the incidence of POD. In this retrospective, single-blinded observational study of 281 geriatric patients, the researchers found that sevoflurane was associated with a higher risk of POD as compared to propofol. However, these anesthetics were not associated with surgical outcomes such as postoperative 30-day complications or the length of postoperative hospital stay. While these findings added new knowledge to this field of research, several limitations should be kept in mind when interpreting this study’s results. For instance, the sample size was relatively small, with all cases selected from a single center utilizing a retrospective analysis. In addition, although a standardized nursing screening tool was used as a method for delirium detection, hypoactive delirium or less symptomatic delirium may have been missed, which in turn would lead to an underestimation of POD incidence. The latter is a common limitation in delirium research.

In Study 2, Mei et al similarly explored the effects of general anesthetics on POD in older surgical patients. Specifically, using a randomized clinical trial design, the investigators compared propofol with sevoflurane in older patients who underwent TKR/THR, and their roles in POD severity and duration. Although the incidence of POD was higher in those who received propofol compared to sevoflurane, this trial was underpowered and the results did not reach statistical significance. In addition, while the duration of POD was slightly longer in the propofol group compared to the sevoflurane group (0.5 vs 0.3 days), it was unclear if this finding was clinically significant. Similar to many research studies in POD, limitations of Study 2 included a small sample size of 209 patients, with all participants enrolled from a single center. On the other hand, this study illustrated the feasibility of a method that allowed reproducible prospective assessment of POD time course using CAM and CAM-S.

Applications for Clinical Practice and System Implementation

The delineation of risk factors that contribute to delirium after surgery in older patients is key to mitigating risks for POD and improving clinical outcomes. An important step towards a better understanding of these modifiable risk factors is to clearly quantify intraoperative risk of POD attributable to specific anesthetics. While preclinical studies have shown differential neurotoxicity effects of propofol and sevoflurane, their impact on clinically important neurologic outcomes such as delirium and cognitive decline remains poorly understood. Although Studies 1 and 2 both provided head-to-head comparisons of propofol and sevoflurane as risk factors for POD in high-operative-stress surgeries in older patients, the results were inconsistent. That being said, this small incremental increase in knowledge was not unexpected in the course of discovery around a clinically complex research question. Importantly, these studies provided evidence regarding the methodological approaches that could be taken to further this line of research.

The factors mediating the differences in neurologic outcomes between anesthetic agents are likely pharmacological, biological, and methodological. Pharmacologically, the differences between target receptors, such as GABA-A (propofol, etomidate) or NMDA (ketamine), could be a defining feature in the differing incidence of POD. Secondary actions of anesthetic agents on glycine, nicotinic, and acetylcholine receptors could play a role as well. Biologically, genes such as CYP2E1, CYP2B6, CYP2C9, GSTP1, UGT1A9, SULT1A1, and NQO1 have all been implicated in the metabolism of anesthetics, and variations in these genes could result in different responses to anesthetics.2 Methodologically, routes of anesthetic administration (eg, inhalational vs intravenous), preexisting anatomical differences, or confounding medical conditions (eg, lower respiratory volume due to older age) may influence POD incidence, duration, or severity. Moreover, methodological differences between Studies 1 and 2, such as the surgeries performed (spine vs TKR/THR), patient populations (South Korean vs Chinese), and the diagnosis and monitoring of delirium (retrospective screening and diagnosis vs prospective CAM/CAM-S), may impact delirium outcomes. These factors should therefore be considered in the design of future clinical trials investigating the effects of anesthetics on POD.

Given the high prevalence of delirium and its associated adverse outcomes in the immediate postoperative period in older patients, further research is warranted to determine how anesthetics affect POD in order to optimize perioperative care and mitigate risks in this vulnerable population. Moreover, parallel investigations into how anesthetics differentially impact the development of transient or longer-term cognitive impairment after a surgical procedure (ie, postoperative cognitive dysfunction) in older adults are urgently needed in order to improve their cognitive health.

Practice Points

  • Intravenous propofol and inhalational sevoflurane may be differentially associated with incidence, duration, and severity of POD in geriatric surgical patients.
  • Further larger-scale studies are warranted to clarify the role of anesthetic choice in POD in order to optimize surgical outcomes in older patients.

–Jared Doan, BS, and Fred Ko, MD
Icahn School of Medicine at Mount Sinai

References

1. Dasgupta M, Dumbrell AC. Preoperative risk assessment for delirium after noncardiac surgery: a systematic review. J Am Geriatr Soc. 2006;54(10):1578-1589. doi:10.1111/j.1532-5415.2006.00893.x

2. Mikstacki A, Skrzypczak-Zielinska M, Tamowicz B, et al. The impact of genetic factors on response to anaesthetics. Adv Med Sci. 2013;58(1):9-14. doi:10.2478/v10039-012-0065-z


Effectiveness of Colonoscopy for Colorectal Cancer Screening in Reducing Cancer-Related Mortality: Interpreting the Results From Two Ongoing Randomized Trials 

Study 1 Overview (Bretthauer et al) 

Objective: To evaluate the impact of screening colonoscopy on colon cancer–related death. 

Design: Randomized trial conducted in 4 European countries.

Setting and participants: Presumptively healthy men and women between the ages of 55 and 64 years were selected from population registries in Poland, Norway, Sweden, and the Netherlands between 2009 and 2014. Eligible participants had not previously undergone screening. Patients with a diagnosis of colon cancer before trial entry were excluded.

Intervention: Participants were randomly assigned in a 1:2 ratio to undergo colonoscopy screening by invitation or to no invitation and no screening. Participants were randomized using a computer-generated allocation algorithm. Patients were stratified by age, sex, and municipality.
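
The allocation scheme is described only as computer-generated and stratified by age, sex, and municipality; a common way to implement such a scheme is permuted-block randomization within each stratum. The sketch below is a minimal illustration under that assumption, not the trial's actual algorithm, and the example data are hypothetical.

```python
import random

def stratified_assignments(participants, seed=2009):
    """Assign 'invited' vs 'usual care' in a 1:2 ratio within each
    (age, sex, municipality) stratum using shuffled permuted blocks."""
    rng = random.Random(seed)
    block_template = ["invited"] + ["usual care"] * 2  # 1:2 allocation ratio
    blocks, assignments = {}, {}
    for p in participants:
        stratum = (p["age"], p["sex"], p["municipality"])
        block = blocks.setdefault(stratum, [])
        if not block:  # start a new shuffled block for this stratum
            block.extend(rng.sample(block_template, len(block_template)))
        assignments[p["id"]] = block.pop()
    return assignments

# Hypothetical cohort
cohort = [{"id": i, "age": 55 + i % 10, "sex": "F" if i % 2 else "M",
           "municipality": "Oslo"} for i in range(6)]
print(stratified_assignments(cohort))
```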

Main outcome measures: The primary endpoint of the study was risk of colorectal cancer and related death after a median follow-up of 10 to 15 years. The main secondary endpoint was death from any cause.

Main results: The study reported follow-up data from 84,585 participants (89.1% of all participants originally included in the trial); the remainder were excluded or lacked follow-up data from the usual-care group. Men (50.1%) and women (49.9%) were equally represented, the median age at entry was 59 years, and the median follow-up was 10 years. Characteristics were otherwise balanced. Among those who underwent colonoscopy, good bowel preparation was reported in 91% and cecal intubation was achieved in 96.8%. Overall, 42% of the invited group underwent screening, although rates varied by country (33%-60%). Colorectal cancer was diagnosed at screening in 62 participants (0.5% of those screened). Adenomas were detected in 30.7% of screened participants; 15 patients had polypectomy-related major bleeding, and there were no perforations.

The risk of colorectal cancer at 10 years was 0.98% in the invited-to-screen group and 1.20% in the usual-care group (risk ratio, 0.82; 95% CI, 0.70-0.93). The reported number needed to invite to prevent 1 case of colon cancer over a 10-year period was 455. The risk of colorectal cancer–related death at 10 years was 0.28% in the invited-to-screen group and 0.31% in the usual-care group (risk ratio, 0.90; 95% CI, 0.64-1.16). An adjusted per-protocol analysis was performed to estimate the effect of screening had all participants assigned to the screening group undergone screening. In this analysis, the risk of colorectal cancer at 10 years decreased from 1.22% to 0.84% (risk ratio, 0.69; 95% CI, 0.55-0.83).
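
The headline risk ratio and number needed to invite follow directly from the reported 10-year risks; a quick arithmetic check:

```python
risk_invited, risk_usual = 0.0098, 0.0120  # reported 10-year colorectal cancer risks

risk_ratio = risk_invited / risk_usual     # ~0.82, as reported
nni = 1 / (risk_usual - risk_invited)      # number needed to invite, ~455
print(f"RR = {risk_ratio:.2f}, NNI = {nni:.0f}")
```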

Conclusion: Based on the results of this European randomized trial, the risk of colorectal cancer at 10 years was lower among those who were invited to undergo screening.

Study 2 Overview (Forsberg et al) 

Objective: To investigate the effect of colorectal cancer screening with once-only colonoscopy or fecal immunochemical testing (FIT) on colorectal cancer mortality and incidence.

Design: Randomized controlled trial in Sweden utilizing a population registry. 

Setting and participants: Patients aged 60 years at the time of entry were identified from a population-based registry from the Swedish Tax Agency.

Intervention: Individuals were assigned by an independent statistician to once-only colonoscopy, 2 rounds of FIT 2 years apart, or a control group in which no intervention was performed. Patients were assigned in a 1:6 ratio for colonoscopy vs control and a 1:2 ratio for FIT vs control.

Main outcome measures: The primary endpoint of the trial was colorectal cancer incidence and mortality.

Main results: A total of 278,280 participants were included in the study from March 1, 2014, through December 31, 2020 (31,140 in the colonoscopy group, 60,300 in the FIT group, and 186,840 in the control group). Of those in the colonoscopy group, 35% underwent colonoscopy, and 55% of those in the FIT group participated in testing. Colorectal cancer was detected in 0.16% (49) of people in the colonoscopy group and 0.20% (121) of people in the FIT group (relative risk, 0.78; 95% CI, 0.56-1.09). The advanced adenoma detection rate was 2.05% in the colonoscopy group and 1.61% in the FIT group (relative risk, 1.27; 95% CI, 1.15-1.41). There were 2 perforations and 15 major bleeding events in the colonoscopy group. More right-sided adenomas were detected in the colonoscopy group.
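
The reported relative risk for cancer detection and its confidence interval can be reproduced from the raw counts with the standard log-risk-ratio method (the normal-approximation formula below is an assumption about how the interval was derived):

```python
from math import exp, log, sqrt

a, n1 = 49, 31_140   # cancers / participants, colonoscopy group
b, n2 = 121, 60_300  # cancers / participants, FIT group

rr = (a / n1) / (b / n2)
se = sqrt(1/a - 1/n1 + 1/b - 1/n2)  # standard error of log(RR)
lo, hi = exp(log(rr) - 1.96 * se), exp(log(rr) + 1.96 * se)
print(f"RR = {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # ~0.78 (0.56-1.09)
```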

Conclusion: The results of the current study highlight similar detection rates in the colonoscopy and FIT groups. Should further follow-up show a benefit in disease-specific mortality, such screening strategies could be translated into population-based screening programs.

Commentary 

The first colonoscopy screening recommendations were established in the mid-1990s in the United States, and over the subsequent 2 decades colonoscopy has been the main recommended modality for colorectal cancer screening in this country. The advantage of colonoscopy over other screening modalities (sigmoidoscopy and fecal-based testing) is that it can examine the entire large bowel and allow for removal of potentially precancerous lesions. However, data supporting colonoscopy as a screening modality for colorectal cancer are largely based on cohort studies.1,2 These studies have reported a significant reduction in the incidence of colon cancer, and colorectal cancer mortality was notably lower in the screened populations. For example, one study among health professionals found a nearly 70% reduction in colorectal cancer mortality in those who underwent at least 1 screening colonoscopy.3

There has been a lack of randomized data validating the efficacy of colonoscopy screening in reducing colorectal cancer–related deaths. The current study by Bretthauer et al addresses this important need and enhances our understanding of the efficacy of colorectal cancer screening with colonoscopy. In this randomized trial involving more than 84,000 participants from Poland, Norway, Sweden, and the Netherlands, there was an 18% decrease in the risk of colorectal cancer over a 10-year period in the intention-to-screen population. The reduction in the risk of death from colorectal cancer was not statistically significant (risk ratio, 0.90; 95% CI, 0.64-1.16). These results are surprising and raise the question of whether previous studies overestimated the effectiveness of colonoscopy in reducing the risk of colorectal cancer–related death. There are, however, several limitations to the Bretthauer et al study.

Perhaps the most important limitation is that only 42% of participants in the invited-to-screen cohort actually underwent screening colonoscopy, raising the question of whether the modest effect observed simply reflects low participation in the screening protocol. In the adjusted per-protocol analysis, colonoscopy was estimated to reduce the risk of colorectal cancer by 31% and the risk of colorectal cancer–related death by around 50%. These findings are more in line with prior published studies on the efficacy of colorectal cancer screening. The authors plan to repeat this analysis at 15 years, and it is possible that a significant reduction in colorectal cancer and colorectal cancer–related death will emerge on longer follow-up.

While the results of the Bretthauer et al trial are important, randomized trials directly comparing the effectiveness of different colorectal cancer screening strategies are lacking. The Forsberg et al trial, also ongoing, seeks to address this vitally important gap. The SCREESCO trial compares the efficacy of once-only colonoscopy with FIT performed in 2 rounds 2 years apart or no screening. The currently reported data are preliminary but show a similarly low rate of participation among those invited to colonoscopy (35%), a limitation shared with the Bretthauer et al study. Furthermore, there is some question regarding colonoscopy quality in this study, which had a very low reported adenoma detection rate.

While the current studies are important and provide quality randomized data on the effect of colorectal cancer screening, many questions remain unanswered. If the results presented by Bretthauer et al represent the real-world scenario, colonoscopy screening may offer little advantage over simpler, less-invasive modalities (ie, FIT). Further follow-up from the SCREESCO trial will help shed light on this question, although its very low participation rate could likewise lead to a substantial underestimate of the effectiveness of screening. Additional analysis and longer follow-up will be vital to fully understand the benefits of screening colonoscopy. In the meantime, screening remains an important tool for early detection of colorectal cancer and retains a category A recommendation from the United States Preventive Services Task Force.4

Applications for Clinical Practice and System Implementation

Current guidelines continue to strongly recommend screening for colorectal cancer for persons between 45 and 75 years of age (category B recommendation for those aged 45 to 49 years per the United States Preventive Services Task Force). Stool-based tests and direct visualization tests are both endorsed as screening options. Further follow-up from the presented studies is needed to help shed light on the magnitude of benefit of these modalities.

Practice Points

  • Current guidelines continue to strongly recommend screening for colon cancer in those aged 45 to 75 years.
  • The optimal modality for screening and the impact of screening on cancer-related mortality requires longer- term follow-up from these ongoing studies.

–Daniel Isaac, DO, MS 

References

1. Lin JS, Perdue LA, Henrikson NB, Bean SI, Blasi PR. Screening for Colorectal Cancer: An Evidence Update for the U.S. Preventive Services Task Force [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2021 May. Report No.: 20-05271-EF-1.

2. Lin JS, Perdue LA, Henrikson NB, Bean SI, Blasi PR. Screening for colorectal cancer: updated evidence report and systematic review for the US Preventive Services Task Force. JAMA. 2021;325(19):1978-1998. doi:10.1001/jama.2021.4417

3. Nishihara R, Wu K, Lochhead P, et al. Long-term colorectal-cancer incidence and mortality after lower endoscopy. N Engl J Med. 2013;369(12):1095-1105. doi:10.1056/NEJMoa1301969

4. U.S. Preventive Services Task Force. Colorectal cancer: screening. Published May 18, 2021. Accessed November 8, 2022. https://uspreventiveservicestaskforce.org/uspstf/recommendation/colorectal-cancer-screening

Article PDF
Issue
Journal of Clinical Outcomes Management - 29(6)
Publications
Topics
Page Number
196-198
Sections
Article PDF
Article PDF

Study 1 Overview (Bretthauer et al) 

Objective: To evaluate the impact of screening colonoscopy on colon cancer–related death. 

Design: Randomized trial conducted in 4 European countries.

Setting and participants: Presumptively healthy men and women between the ages of 55 and 64 years were selected from population registries in Poland, Norway, Sweden, and the Netherlands between 2009 and 2014. Eligible participants had not previously undergone screening. Patients with a diagnosis of colon cancer before trial entry were excluded.

Intervention: Participants were randomly assigned in a 1:2 ratio to undergo colonoscopy screening by invitation or to no invitation and no screening. Participants were randomized using a computer-generated allocation algorithm. Patients were stratified by age, sex, and municipality.

Main outcome measures: The primary endpoint of the study was risk of colorectal cancer and related death after a median follow-up of 10 to 15 years. The main secondary endpoint was death from any cause.

Main results: The study reported follow-up data from 84,585 participants (89.1% of all participants originally included in the trial). The remaining participants were either excluded or their data could not be included due to lack of follow-up data from the usual-care group. Men (50.1%) and women (49.9%) were equally represented. The median age at entry was 59 years, and the median follow-up was 10 years. Characteristics were otherwise balanced. Good bowel preparation was reported in 91% of participants, and cecal intubation was achieved in 96.8%. Overall, 42% of invited participants underwent screening, with rates varying by country (33%-60%). Colorectal cancer was diagnosed at screening in 62 participants (0.5% of the screening group). Adenomas were detected in 30.7% of participants; 15 patients had polypectomy-related major bleeding. There were no perforations.

The risk of colorectal cancer at 10 years was 0.98% in the invited-to-screen group and 1.2% in the usual-care group (risk ratio, 0.82; 95% CI, 0.70-0.93). The reported number needed to invite to prevent 1 case of colon cancer in a 10-year period was 455. The risk of colorectal cancer–related death at 10 years was 0.28% in the invited-to-screen group and 0.31% in the usual-care group (risk ratio, 0.90; 95% CI, 0.64-1.16). An adjusted per-protocol analysis was performed to estimate the effect of screening had all participants assigned to the screening group undergone screening. In this analysis, the risk of colorectal cancer at 10 years decreased from 1.22% to 0.84% (risk ratio, 0.69; 95% CI, 0.66-0.83).
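
These point estimates can be sanity-checked with simple arithmetic. The sketch below (Python, illustrative only) uses the rounded risks quoted above rather than the trial's raw data, so small discrepancies with the published figures are expected:

```python
# Back-of-envelope check of the Bretthauer et al point estimates quoted above.
# Inputs are the rounded 10-year risks reported in the text, not patient-level data.

def risk_ratio(risk_screen: float, risk_control: float) -> float:
    """Ratio of cumulative incidences (invited-to-screen vs usual care)."""
    return risk_screen / risk_control

def number_needed_to_invite(risk_screen: float, risk_control: float) -> float:
    """Reciprocal of the absolute risk reduction."""
    return 1.0 / (risk_control - risk_screen)

# 10-year colorectal cancer risk: 0.98% invited vs 1.2% usual care
rr_crc = risk_ratio(0.0098, 0.012)            # ~0.82, matching the reported RR
nni = number_needed_to_invite(0.0098, 0.012)  # ~455, matching the reported NNI

# 10-year colorectal cancer death risk: 0.28% vs 0.31%
rr_death = risk_ratio(0.0028, 0.0031)         # ~0.90, not statistically significant

print(f"RR (CRC incidence): {rr_crc:.2f}")
print(f"Number needed to invite: {nni:.0f}")
print(f"RR (CRC death): {rr_death:.2f}")
```

The number needed to invite is simply the reciprocal of the absolute risk reduction, which is why a small absolute difference (1.2% vs 0.98%) still translates into inviting roughly 455 people to prevent 1 cancer over 10 years.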

Conclusion: Based on the results of this European randomized trial, the risk of colorectal cancer at 10 years was lower among those who were invited to undergo screening.

Study 2 Overview (Forsberg et al) 

Objective: To investigate the effect of colorectal cancer screening with once-only colonoscopy or fecal immunochemical testing (FIT) on colorectal cancer mortality and incidence.

Design: Randomized controlled trial in Sweden utilizing a population registry. 

Setting and participants: Patients aged 60 years at the time of entry were identified from a population-based registry from the Swedish Tax Agency.

Intervention: Individuals were assigned by an independent statistician to once-only colonoscopy, 2 rounds of FIT 2 years apart, or a control group in which no intervention was performed. Patients were assigned in a 1:6 ratio for colonoscopy vs control and a 1:2 ratio for FIT vs control.

Main outcome measures: The primary endpoint of the trial was colorectal cancer incidence and mortality.

Main results: A total of 278,280 participants were included in the study from March 1, 2014, through December 31, 2020 (31,140 in the colonoscopy group, 60,300 in the FIT group, and 186,840 in the control group). Of those in the colonoscopy group, 35% underwent colonoscopy, and 55% of those in the FIT group participated in testing. Colorectal cancer was detected in 0.16% (49) of people in the colonoscopy group and 0.20% (121) of people in the FIT group (relative risk, 0.78; 95% CI, 0.56-1.09). The advanced adenoma detection rate was 2.05% in the colonoscopy group and 1.61% in the FIT group (relative risk, 1.27; 95% CI, 1.15-1.41). There were 2 perforations noted in the colonoscopy group and 15 major bleeding events. More right-sided adenomas were detected in the colonoscopy group.
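
The reported interval can be approximated from the raw counts quoted above. The sketch below applies a standard log-scale (Katz) Wald approximation, which happens to reproduce the published CI; the trial's actual statistical method is not stated here, so treat this as an illustration rather than the authors' analysis:

```python
import math

def rr_with_ci(events_a: int, n_a: int, events_b: int, n_b: int, z: float = 1.96):
    """Relative risk with a Wald 95% CI computed on the log scale (Katz method)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se_log = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lower = math.exp(math.log(rr) - z * se_log)
    upper = math.exp(math.log(rr) + z * se_log)
    return rr, lower, upper

# Colorectal cancer detection: colonoscopy (49/31,140) vs FIT (121/60,300)
rr, lo, hi = rr_with_ci(49, 31_140, 121, 60_300)
print(f"RR {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # -> RR 0.78 (95% CI, 0.56-1.09)
```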

Conclusion: The results of the current study highlight similar detection rates in the colonoscopy and FIT groups. Should further follow-up show a benefit in disease-specific mortality, such screening strategies could be translated into population-based screening programs.

Commentary 

The first colonoscopy screening recommendations were established in the mid-1990s in the United States, and over the subsequent 2 decades colonoscopy became the main recommended modality for colorectal cancer screening in this country. The advantage of colonoscopy over other screening modalities (sigmoidoscopy and fecal-based testing) is that it can examine the entire large bowel and allow for removal of potentially precancerous lesions. However, data to support colonoscopy as a screening modality for colorectal cancer are largely based on cohort studies.1,2 These studies have reported a significant reduction in the incidence of colon cancer, and colorectal cancer mortality was notably lower in the screened populations. For example, one study among health professionals found a nearly 70% reduction in colorectal cancer mortality in those who underwent at least 1 screening colonoscopy.3

There has been a lack of randomized clinical data validating the efficacy of colonoscopy screening for reducing colorectal cancer–related deaths. The current study by Bretthauer et al addresses this need and enhances our understanding of the efficacy of colorectal cancer screening with colonoscopy. In this randomized trial involving more than 84,000 participants from Poland, Norway, Sweden, and the Netherlands, there was an 18% decrease in the risk of colorectal cancer over a 10-year period in the intention-to-screen population. The reduction in the risk of death from colorectal cancer was not statistically significant (risk ratio, 0.90; 95% CI, 0.64-1.16). These results are surprising and raise the question of whether previous studies overestimated the effectiveness of colonoscopy in reducing the risk of colorectal cancer–related deaths. The Bretthauer et al study has several limitations, however.

Perhaps the most important limitation is that only 42% of participants in the invited-to-screen cohort actually underwent screening colonoscopy, which raises the question of whether the modest effect reflects low participation rather than low efficacy. In the adjusted per-protocol analysis, colonoscopy was estimated to reduce the risk of colorectal cancer by 31% and the risk of colorectal cancer–related death by around 50%, findings more in line with prior published studies of colorectal cancer screening. The authors plan to repeat this analysis at 15 years, and it is possible that a significant reduction in colorectal cancer and colorectal cancer–related death will emerge with longer follow-up.
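
To see how a 42% participation rate dilutes an intention-to-screen estimate, consider a deliberately naive all-or-nothing compliance calculation (an illustration only, not the adjustment method Bretthauer et al used; it assumes non-participants receive no benefit and share the same baseline risk as participants, which is why it yields a larger complier effect of roughly 43% than the published 31% per-protocol estimate):

```python
# Illustration only: naive dilution model for an intention-to-screen effect.
# Assumes (1) non-participants get zero benefit and (2) participants and
# non-participants share the same baseline risk. The published adjusted
# per-protocol analysis does not make these simplifying assumptions.

def complier_rr(rr_itt: float, participation: float) -> float:
    """Invert RR_ITT = 1 - participation * (1 - RR_compliers)."""
    return 1 - (1 - rr_itt) / participation

rr_itt = 0.82         # intention-to-screen risk ratio quoted above
participation = 0.42  # share of invited participants who were screened

print(f"Naive complier RR: {complier_rr(rr_itt, participation):.2f}")  # ~0.57
```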

While the results of the Bretthauer et al trial are important, randomized trials that directly compare the effectiveness of different colorectal cancer screening strategies are lacking. The Forsberg et al trial (SCREESCO), also ongoing, seeks to address this gap by comparing once-only colonoscopy, FIT every 2 years, and no screening. The currently reported data are preliminary but show a similarly low rate of colonoscopy screening among those invited (35%), a limitation shared with the Bretthauer et al study. Furthermore, the very low reported adenoma detection rate raises some question about colonoscopy quality in this trial.

While the current studies are important and provide quality randomized data on the effect of colorectal cancer screening, many questions remain unanswered. Should the results presented by Bretthauer et al reflect the real-world scenario, colonoscopy may not prove more effective than simpler, less-invasive screening modalities (ie, FIT). Further follow-up from the SCREESCO trial will help shed light on this question, though its very low participation rate could lead to a substantial underestimate of the effectiveness of screening. Additional analysis and longer follow-up will be vital to fully understand the benefits of screening colonoscopy. In the meantime, screening remains an important tool for early detection of colorectal cancer and carries a category A recommendation from the United States Preventive Services Task Force.4

Applications for Clinical Practice and System Implementation

Current guidelines continue to strongly recommend screening for colorectal cancer for persons between 45 and 75 years of age (category B recommendation for those aged 45 to 49 years per the United States Preventive Services Task Force). Stool-based tests and direct visualization tests are both endorsed as screening options. Further follow-up from the presented studies is needed to help shed light on the magnitude of benefit of these modalities.

Practice Points

  • Current guidelines continue to strongly recommend screening for colon cancer in those aged 45 to 75 years.
  • The optimal screening modality and the impact of screening on cancer-related mortality require longer-term follow-up from these ongoing studies.

–Daniel Isaac, DO, MS 

References

1. Lin JS, Perdue LA, Henrikson NB, Bean SI, Blasi PR. Screening for Colorectal Cancer: An Evidence Update for the U.S. Preventive Services Task Force [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2021 May. Report No.: 20-05271-EF-1.

2. Lin JS, Perdue LA, Henrikson NB, Bean SI, Blasi PR. Screening for colorectal cancer: updated evidence report and systematic review for the US Preventive Services Task Force. JAMA. 2021;325(19):1978-1998. doi:10.1001/jama.2021.4417

3. Nishihara R, Wu K, Lochhead P, et al. Long-term colorectal-cancer incidence and mortality after lower endoscopy. N Engl J Med. 2013;369(12):1095-1105. doi:10.1056/NEJMoa1301969

4. U.S. Preventive Services Task Force. Colorectal cancer: screening. Published May 18, 2021. Accessed November 8, 2022. https://uspreventiveservicestaskforce.org/uspstf/recommendation/colorectal-cancer-screening


Residents react: Has residency become easier or overly difficult?

Medical residents have cleared many hurdles to get where they are, as detailed in Medscape’s Residents Salary and Debt Report 2022, which explores their challenges with compensation and school loans as well as long hours and strained personal relationships.

Whereas 72% of residents described themselves as “very satisfied” or “satisfied” with their professional training experience, only 27% felt that highly about how well they’re paid. Satisfaction levels increased somewhat further into residency, reaching 35% in year 5.

Respondents to the survey described mixed feelings about residency, with some concluding it is a rite of passage.
 

Do residents have it easier today?

So, is that rite of passage getting any easier? You’ll get different answers from residents and physicians.

Medscape asked respondents whether their journey to residency was made easier once the Step 1 exam was converted to pass/fail and interviews were moved online because of the COVID-19 pandemic.

Many residents conceded their journey became easier, less stressful, and less expensive under the new Step 1 formats. One respondent said he was freed up to focus more intently on higher-yield academic goals such as research.

Another respondent called the pass/fail change a “total game-changer,” as it lets applicants apply to all specialties and have qualifications other than test scores considered. A resident who took Step 1 before pass/fail was instituted described the “insurmountable stress associated with studying for Step 1 to get the highest score you possibly could.”

But not all residents welcomed the change; some disliked how hard it is to differentiate themselves, beyond med school pedigree, in the absence of Step 1 scores.

Meanwhile, some doctors posting comments to the Medscape report strongly disagreed with the idea that residency life is getting harder. They depict residency as a rite of passage under the best of circumstances.

“Whatever issues there may be [today’s residents] are still making eight times what I got and, from what I’ve seen, we had a lot more independent responsibilities,” one physician commenter said.

Other doctors were more sympathetic and worried about the future price to be paid for hardships during residency. “Compensation should not be tied to the willingness to sacrifice the most beautiful years of life,” one commentator wrote.
 

Online interviews: Pros and cons

Many resident respondents celebrated the opportunity to interview for residency programs online. Some who traveled to in-person interviews before the pandemic said they racked up as much as $10,000 in travel costs, adding to their debt loads.

But not everyone was a fan. Other residents sniped that peers can apply to more residencies and “hoard” interviews, making the competition that much harder.

And how useful are online interviews to a prospective resident? “Virtual interviews are terrible for getting a true sense for a program or even the people,” a 1st-year family medicine resident complained. And it’s harder for an applicant “to shine when you’re on Zoom,” a 1st-year internal medicine resident opined.
 

Whether to report harassment

In the survey, respondents were asked whether they had ever witnessed sexual abuse, harassment, or misconduct and, if so, what they did about it. Among those who had, many opted to take no action, fearing retaliation or retribution. “I saw a resident made out to be a ‘problem resident’ when reporting it and then ultimately fired,” one respondent recounted.

Other residents said they felt unsure about the protocol, whom to report to, or even what constituted harassment or misconduct. “I didn’t realize [an incident] was harassment until later,” one resident said. Others thought “minor” or “subtle” incidents did not warrant action; “they are typically microaggressions and appear accepted within the culture of the institution.”

Residents’ confusion heightened when the perpetrator was a patient. “I’m not sure what to do about that,” a respondent acknowledged. An emergency medicine resident added, “most of the time … it is the patients who are acting inappropriately, saying inappropriate things, etc. There is no way to file a complaint like that.”
 

Rewards and challenges for residents

Among the most rewarding parts of residency that respondents described were developing specific skills such as surgical techniques, having job security, and “learning a little day by day,” in the words of a 1st-year gastroenterology resident.

Others felt gratified by the chances to help patients and families, their teams, and to advance social justice and health equity.

But challenges abound – chiefly money struggles. A 3rd-year psychiatry resident lamented “being financially strapped in the prime of my life from student loans and low wages.”

Stress and emotional fatigue also came up often as major challenges. “Constantly being told to do more, more presentations, more papers, more research, more studying,” a 5th-year neurosurgery resident bemoaned. “Being expected to be at the top of my game despite being sleep-deprived, depressed, and burned out,” a 3rd-year ob.gyn. resident groused.

But some physician commenters urged residents to look for long-term growth behind the challenges. “Yes, it was hard, but the experience was phenomenal, and I am glad I did it,” one doctor said.

A version of this article first appeared on Medscape.com.


The importance of connection and community


You only are free when you realize you belong no place – you belong every place – no place at all. The price is high. The reward is great. ~ Maya Angelou

At 8 o’clock, every weekday morning, for years and years now, two friends appear in my kitchen for coffee, and so one identity I carry includes being part of the “coffee ladies.” While this is one of the smaller and more intimate groups to which I belong, I am also a member (“distinguished,” no less) of a slightly larger group: the American Psychiatric Association, and being part of both groups is meaningful to me in more ways than I can describe.

Dr. Dinah Miller

When I think back over the years, I – like most people – have belonged to many people and places, either officially or unofficially. It is these connections that define us, fill our time, give us meaning and purpose, and anchor us. We belong to our families and friends, but we also belong to our professional and community groups, our institutions – whether they are hospitals, schools, religious centers, country clubs, or charitable organizations – as well as interest and advocacy groups. And finally, we belong to our coworkers and to our patients, and they to us, especially if we see the same people over time. Being a psychiatrist can be a solitary career, and it can take a little effort to be a part of larger worlds, especially for those who find solace in more individual activities.

As I’ve gotten older, I’ve noticed that I belong to fewer of these groups. I’m no longer a little league or field hockey mom, nor a member of the neighborhood babysitting co-op, and I’ve exhausted the gamut of council and leadership positions in my APA district branch. I’ve joined organizations only to pay the membership fee and then never gone to their meetings or events. The pandemic has accounted for some of this: I still belong to my book club, but I often read the book and skip the Zoom meetings because I miss the real-life aspect of getting together. Being boxed on a screen is not the same as the one-on-one conversations before the formal book discussion. And while I still carry a host of identities, I imagine it is not unusual to belong to fewer organizations as time passes. It’s not all bad; there is something to be said for living life at a less frenetic pace as fewer entities lay claim to my time.

In psychiatry, our patients span the range of human experience: Some are very engaged with their worlds, while others struggle to make even the most basic of connections. Their lives may seem disconnected – empty, even – and I find myself encouraging people to reach out, to find activities that will ease their loneliness and integrate a feeling of belonging in a way that adds meaning and purpose. For some people, this may be as simple as asking a friend to have lunch, but even that can be an overwhelming obstacle for someone who is depressed, or for someone who has no friends.

Patients may counter my suggestions with a host of reasons as to why they can’t connect. Perhaps their friend is too busy with work or his family, the lunch would cost too much, there’s no transportation, or no restaurant that could meet their dietary needs. Or perhaps they are just too fearful of being rejected.

Psychiatric disorders, by their nature, can be very isolating. Depressed and anxious people often find it a struggle just to get through their days; adding new people and activities is not something that brings joy. For people suffering with psychosis, their internal realities are often all-consuming, and there may be no room for accommodating others. And finally, what I hear over and over is that people are afraid of what others might think of them, and this fear is paralyzing. I try to suggest that we never really know or control what others think of us, but obviously this does not reassure most patients, as they are also bewildered by their own irrational fear. To go to an event unaccompanied, or even to a party to which they have been invited, is a hurdle they won’t (or can’t) attempt.

The pandemic, with its initial months of shutdown and then years of fear of illness, has created new ways of connecting. Our “Zoom” world can be very convenient – in many ways it has opened up aspects of learning and connection for people who are short on time or struggle with transportation. In the comfort of our living rooms, in pajamas and slippers, we can take classes, join clubs, attend Alcoholics Anonymous meetings, go to conferences or religious services, and be part of any number of organizations without flying or searching for parking. I love that, with 1 hour and a single click, I can now attend my department’s weekly Grand Rounds. But for many who struggle with technology, or who don’t feel the same benefits from online encounters, the pandemic has been an isolating and lonely time.

It should not be assumed that isolation has been a negative experience for everyone. For many who struggle with interpersonal relationships, or for children who are bullied or teased at school or who feel self-conscious sitting alone at lunch, there may be none of the presumed “fear of missing out.” As one adult patient told me: “You know, I do ‘alone’ well.” For some, it has been a relief to be freed of the pressure to socialize, attend parties, or pursue online dating – a process I think of as “people-shopping,” which looks so different from the old days of organic interactions that led to romance over time. Many have found relief in the absence of these social pressures.

Community, connection, and belonging are not inconsequential things, however. They are part of what adds to life’s richness, and they are associated with good health and longevity. The Harvard Study of Adult Development, begun in 1938, has been tracking two groups of Boston teenagers – and now their wives and children – for 84 years. Tracking one group of Harvard students and another group of teens from poorer areas in Boston, the project is now on its 4th director.

George Vaillant, MD, author of “Aging Well: Surprising Guideposts to a Happier Life from the Landmark Harvard Study of Adult Development” (New York: Little, Brown Spark, 2002) was the program’s director from 1972 to 2004. “When the study began, nobody cared about empathy or attachment. But the key to healthy aging is relationships, relationships, relationships,” Dr. Vaillant said in an interview in the Harvard Gazette.

Susan Pinker is a social psychologist and author of “The Village Effect: How Face-to-Face Contact Can Make Us Healthier and Happier” (Toronto: Random House Canada, 2014). In her 2017 TED talk, she notes that in all developed countries, women live 6-8 years longer than men, and are half as likely to die at any age. She is underwhelmed by digital relationships, and says that real life relationships affect our physiological states differently and in more beneficial ways. “Building your village and sustaining it is a matter of life and death,” she states at the end of her TED talk.

I spoke with Ms. Pinker about her thoughts on how our personal villages change over time. She was quick to tell me that she is not against digital communities. “I’m not a Luddite. As a writer, I probably spend as much time facing a screen as anyone else. But it’s important to remember that digital communities can amplify existing relationships, and don’t replace in-person social contact. A lot of people have drunk the Kool-Aid about virtual experiences, even though they are not the same as real life interactions.

“Loneliness takes on a U-shaped function across adulthood,” she explained with regard to how age impacts our social connections. “People are lonely when they first leave home or when they finish college and go out into the world. Then they settle into new situations; they can make friends at work, through their children, in their neighborhood, or by belonging to organizations. As people settle into their adult lives, there are increased opportunities to connect in person. But loneliness increases again in late middle age.” She explained that everyone loses people as their children move away, friends move, and couples may divorce or a spouse dies.

“Attrition of our social face-to-face networks is an ugly feature of aging,” Ms. Pinker said. “Some people are good at replacing the vacant spots; they sense that it is important to invest in different relationships as you age. It’s like a garden that you need to tend by replacing the perennials that die off in the winter.” The United States, she pointed out, has a culture that is particularly difficult for people in their later years.

My world is a little quieter than it once was, but collecting and holding on to people is important to me. The organizations and affiliations change over time, as does the brand of coffee. So I try to inspire some of my more isolated patients to prioritize their relationships, to let go of their grudges, to tolerate the discomfort of moving from their places of comfort to the temporary discomfort of reaching out in the service of achieving a less solitary, more purposeful, and healthier life. When it doesn’t come naturally, it can be hard work.

Dr. Miller is a coauthor of “Committed: The Battle Over Involuntary Psychiatric Care” (Johns Hopkins University Press, 2016). She has a private practice and is assistant professor of psychiatry and behavioral sciences at Johns Hopkins University, Baltimore. She has disclosed no relevant financial relationships.

Publications
Topics
Sections

You only are free when you realize you belong no place – you belong every place – no place at all. The price is high. The reward is great. ~ Maya Angelou

At 8 o’clock, every weekday morning, for years and years now, two friends appear in my kitchen for coffee, and so one identity I carry includes being part of the “coffee ladies.” While this is one of the smaller and more intimate groups to which I belong, I am also a member (“distinguished,” no less) of a slightly larger group: the American Psychiatric Association, and being part of both groups is meaningful to me in more ways than I can describe.

Dr. Dinah Miller

When I think back over the years, I – like most people – have belonged to many people and places, either officially or unofficially. It is these connections that define us, fill our time, give us meaning and purpose, and anchor us. We belong to our families and friends, but we also belong to our professional and community groups, our institutions – whether they are hospitals, schools, religious centers, country clubs, or charitable organizations – as well as interest and advocacy groups. And finally, we belong to our coworkers and to our patients, and they to us, especially if we see the same people over time. Being a psychiatrist can be a solitary career, and it can take a little effort to be a part of larger worlds, especially for those who find solace in more individual activities.

As I’ve gotten older, I’ve noticed that I belong to fewer of these groups. I’m no longer a little league or field hockey mom, nor a member of the neighborhood babysitting co-op, and I’ve exhausted the gamut of council and leadership positions in my APA district branch. I’ve joined organizations only to pay the membership fee, and then never gone to their meetings or events. The pandemic has accounted for some of this: I still belong to my book club, but I often read the book and don’t go to the Zoom meetings as I miss the real-life aspect of getting together. Being boxed on a screen is not the same as the one-on-one conversations before the formal book discussion. And while I still carry a host of identities, I imagine it is not unusual to belong to fewer organizations as time passes. It’s not all bad, there is something good to be said for living life at a less frenetic pace as fewer entities lay claim to my time.

In psychiatry, our patients span the range of human experience: Some are very engaged with their worlds, while others struggle to make even the most basic of connections. Their lives may seem disconnected – empty, even – and I find myself encouraging people to reach out, to find activities that will ease their loneliness and integrate a feeling of belonging in a way that adds meaning and purpose. For some people, this may be as simple as asking a friend to have lunch, but even that can be an overwhelming obstacle for someone who is depressed, or for someone who has no friends.

Patients may counter my suggestions with a host of reasons as to why they can’t connect. Perhaps their friend is too busy with work or his family, the lunch would cost too much, there’s no transportation, or no restaurant that could meet their dietary needs. Or perhaps they are just too fearful of being rejected.

You only are free when you realize you belong no place – you belong every place – no place at all. The price is high. The reward is great. ~ Maya Angelou

At 8 o’clock, every weekday morning, for years and years now, two friends appear in my kitchen for coffee, and so one identity I carry includes being part of the “coffee ladies.” While this is one of the smaller and more intimate groups to which I belong, I am also a member (“distinguished,” no less) of a slightly larger group: the American Psychiatric Association, and being part of both groups is meaningful to me in more ways than I can describe.

When I think back over the years, I – like most people – have belonged to many people and places, either officially or unofficially. It is these connections that define us, fill our time, give us meaning and purpose, and anchor us. We belong to our families and friends, but we also belong to our professional and community groups, our institutions – whether they are hospitals, schools, religious centers, country clubs, or charitable organizations – as well as interest and advocacy groups. And finally, we belong to our coworkers and to our patients, and they to us, especially if we see the same people over time. Being a psychiatrist can be a solitary career, and it can take a little effort to be a part of larger worlds, especially for those who find solace in more individual activities.

As I’ve gotten older, I’ve noticed that I belong to fewer of these groups. I’m no longer a little league or field hockey mom, nor a member of the neighborhood babysitting co-op, and I’ve exhausted the gamut of council and leadership positions in my APA district branch. I’ve joined organizations only to pay the membership fee, and then never gone to their meetings or events. The pandemic has accounted for some of this: I still belong to my book club, but I often read the book and skip the Zoom meetings because I miss the real-life aspect of getting together. Being boxed on a screen is not the same as the one-on-one conversations before the formal book discussion. And while I still carry a host of identities, I imagine it is not unusual to belong to fewer organizations as time passes. It’s not all bad; there is something good to be said for living life at a less frenetic pace as fewer entities lay claim to my time.

In psychiatry, our patients span the range of human experience: Some are very engaged with their worlds, while others struggle to make even the most basic of connections. Their lives may seem disconnected – empty, even – and I find myself encouraging people to reach out, to find activities that will ease their loneliness and integrate a feeling of belonging in a way that adds meaning and purpose. For some people, this may be as simple as asking a friend to have lunch, but even that can be an overwhelming obstacle for someone who is depressed, or for someone who has no friends.

Patients may counter my suggestions with a host of reasons as to why they can’t connect. Perhaps their friend is too busy with work or his family, the lunch would cost too much, there’s no transportation, or no restaurant that could meet their dietary needs. Or perhaps they are just too fearful of being rejected.

Psychiatric disorders, by their nature, can be very isolating. Depressed and anxious people often find it a struggle just to get through their days; adding new people and activities is not something that brings joy. For people suffering with psychosis, their internal realities are often all-consuming and there may be no room for accommodating others. And finally, what I hear over and over is that people are afraid of what others might think of them, and this fear is paralyzing. I try to suggest that we never really know or control what others think of us, but obviously, this does not reassure most patients as they are also bewildered by their irrational fear. To go to an event unaccompanied, or even to a party to which they have been invited, is a hurdle they won’t (or can’t) attempt.

The pandemic, with its initial months of shutdown, and then with years of fear of illness, has created new ways of connecting. Our “Zoom” world can be very convenient – in many ways it has opened up aspects of learning and connection for people who are short on time, or struggle with transportation. In the comfort of our living rooms, in pajamas and slippers, we can take classes, join clubs, attend Alcoholics Anonymous meetings, go to conferences or religious services, and be part of any number of organizations without flying or searching for parking. I love that, with 1 hour and a single click, I can now attend my department’s weekly Grand Rounds. But for many who struggle with using technology, or who don’t feel the same benefits from online encounters, the pandemic has been an isolating and lonely time.

It should not be assumed that isolation has been a negative experience for everyone. For many who struggle with interpersonal relationships, for children who are bullied or teased at school or who feel self-conscious sitting alone at lunch, there may not be the presumed “fear of missing out.” As one adult patient told me: “You know, I do ‘alone’ well.” For some, it has been a relief to be freed of the pressure to socialize, attend parties, or pursue online dating – a process I think of as “people-shopping,” which looks so different from the old days of organic interactions that grew into romance over time.

Community, connection, and belonging are not inconsequential things, however. They are part of what adds to life’s richness, and they are associated with good health and longevity. The Harvard Study of Adult Development, begun in 1938, has been tracking two groups of young men – and now their wives and children – for 84 years: one group of Harvard students and another of teenagers from poorer areas of Boston. The project is now on its fourth director.

George Vaillant, MD, author of “Aging Well: Surprising Guideposts to a Happier Life from the Landmark Harvard Study of Adult Development” (New York: Little, Brown Spark, 2002) was the program’s director from 1972 to 2004. “When the study began, nobody cared about empathy or attachment. But the key to healthy aging is relationships, relationships, relationships,” Dr. Vaillant said in an interview in the Harvard Gazette.

Susan Pinker is a social psychologist and author of “The Village Effect: How Face-to-Face Contact Can Make Us Healthier and Happier” (Toronto: Random House Canada, 2014). In her 2017 TED talk, she notes that in all developed countries, women live 6-8 years longer than men, and are half as likely to die at any age. She is underwhelmed by digital relationships, and says that real life relationships affect our physiological states differently and in more beneficial ways. “Building your village and sustaining it is a matter of life and death,” she states at the end of her TED talk.

I spoke with Ms. Pinker about her thoughts on how our personal villages change over time. She was quick to tell me that she is not against digital communities. “I’m not a Luddite. As a writer, I probably spend as much time facing a screen as anyone else. But it’s important to remember that digital communities can amplify existing relationships, and don’t replace in-person social contact. A lot of people have drunk the Kool-Aid about virtual experiences, even though they are not the same as real life interactions.

“Loneliness takes on a U-shaped function across adulthood,” she explained with regard to how age impacts our social connections. “People are lonely when they first leave home or when they finish college and go out into the world. Then they settle into new situations; they can make friends at work, through their children, in their neighborhood, or by belonging to organizations. As people settle into their adult lives, there are increased opportunities to connect in person. But loneliness increases again in late middle age.” She explained that everyone loses people as their children move away, friends move, and couples may divorce or a spouse dies.

“Attrition of our social face-to-face networks is an ugly feature of aging,” Ms. Pinker said. “Some people are good at replacing the vacant spots; they sense that it is important to invest in different relationships as you age. It’s like a garden that you need to tend by replacing the perennials that die off in the winter.” The United States, she pointed out, has a culture that is particularly difficult for people in their later years.

My world is a little quieter than it once was, but collecting and holding on to people is important to me. The organizations and affiliations change over time, as does the brand of coffee. So I try to inspire some of my more isolated patients to prioritize their relationships, to let go of their grudges, to tolerate the discomfort of moving from their places of comfort to the temporary discomfort of reaching out in the service of achieving a less solitary, more purposeful, and healthier life. When it doesn’t come naturally, it can be hard work.

Dr. Miller is a coauthor of “Committed: The Battle Over Involuntary Psychiatric Care” (Johns Hopkins University Press, 2016). She has a private practice and is assistant professor of psychiatry and behavioral sciences at Johns Hopkins University, Baltimore. She has disclosed no relevant financial relationships.


Vonoprazan promising for erosive esophagitis


The oral potassium-competitive acid blocker (PCAB) vonoprazan was noninferior to the proton-pump inhibitor (PPI) lansoprazole for erosive esophagitis – and superior in prespecified exploratory and secondary analyses – according to results of the phase 3 PHALCON-EE trial.

Vonoprazan achieved higher rates of healing and maintenance of healing than lansoprazole, with the benefit seen primarily in patients with more severe esophagitis.

The differences in healing rates were evident after 2 weeks of therapy and were maintained throughout the 24-week study, report Loren Laine, MD, Yale University, New Haven, Conn., and colleagues.

The study was published online in Gastroenterology.

More potent acid suppression

Gastroesophageal reflux disease is one of the most common disorders of the gastrointestinal tract, and erosive esophagitis is its most common complication.

Although standard PPI therapy is effective for healing erosive esophagitis, some patients do not achieve success with this conventional treatment.

Studies suggest that lack of healing of erosive esophagitis with 8 weeks of PPI therapy can be expected in roughly 5%-20% of patients, with rates up to 30% reported in patients with more severe esophagitis.

The PCAB vonoprazan provides more potent inhibition of gastric acid than PPIs and is seen as a potential alternative. However, data on its efficacy for erosive esophagitis are limited, the authors note.

The PHALCON-EE trial enrolled 1,024 adults from the United States and Europe with erosive esophagitis without Helicobacter pylori infection or Barrett esophagus.

Participants were randomized to receive once-daily vonoprazan 20 mg or lansoprazole 30 mg for up to 8 weeks in the healing phase.

The 878 patients whose esophagitis healed were then rerandomized to receive once-daily vonoprazan 10 mg, vonoprazan 20 mg, or lansoprazole 15 mg for 24 weeks in the maintenance phase.

For healing by week 8, vonoprazan was noninferior to lansoprazole in the primary analysis and superior to lansoprazole in a predefined exploratory analysis (92.9% vs. 84.6%; P < .0001).
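
In this two-step design, noninferiority and superiority are sequential tests on the same rate difference, so both claims can hold at once. As a generic sketch of the logic (the article does not state the trial’s noninferiority margin, written Δ here), the lower bound of the confidence interval for the difference in healing rates must satisfy

$$\mathrm{CI}_{\text{lower}}(p_V - p_L) > -\Delta \quad \text{(noninferiority)}, \qquad \mathrm{CI}_{\text{lower}}(p_V - p_L) > 0 \quad \text{(superiority)}.$$

With healing rates of 92.9% vs. 84.6%, the observed difference is 8.3 percentage points, and the reported P < .0001 indicates the interval excluded zero, satisfying both conditions.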

Secondary analyses showed that vonoprazan was noninferior to lansoprazole in mean 24-hour heartburn-free days and superior in healing at week 2 for grade C/D esophagitis (70.2% vs. 52.6%; P = .0008).

For maintenance of healing at week 24, vonoprazan was noninferior to lansoprazole in the primary analysis and superior on secondary analysis of healing (80.7% for vonoprazan 20 mg and 79.2% for vonoprazan 10 mg vs. 72.0% for lansoprazole; P < .0001 for both comparisons).

The most common adverse event was diarrhea in the healing phase and COVID-19 in the maintenance phase. Two deaths occurred, both from COVID-19, during the maintenance phase in the vonoprazan 20-mg group.

As expected, serum gastrin increased to a greater extent with vonoprazan than lansoprazole, with levels > 500 pg/mL in 16% of those taking 20 mg at the end of maintenance therapy, the authors report. After stopping vonoprazan, gastrin levels dropped by roughly 60%-65% within 4 weeks.

Promising new option

“PCABs are a promising new option,” Avin Aggarwal, MD, who was not involved in the study, told this news organization.

They have a “more potent acid inhibitory effect” and have shown “superior healing of erosive esophagitis,” said Dr. Aggarwal, a gastroenterologist and medical director of Banner Health’s South Campus endoscopy services and clinical assistant professor at the University of Arizona in Tucson.

The results of the PHALCON-EE trial “validate noninferiority of PCABs compared to standard PPI therapy in the Western population after being proven in multiple Asian studies,” he said.

Dr. Aggarwal noted that PCABs work the same way as PPIs, by blocking the proton pumps, but “the longer half-life of PCABs and action on both active and inactive proton channels result in greater acid inhibition.”

Long-term effects of PCAB therapy from stronger acid inhibition and resulting hypergastrinemia still remain to be determined, he said.

Earlier this year, the U.S. Food and Drug Administration accepted Phathom Pharmaceuticals’ new drug application for vonoprazan for the treatment of erosive esophagitis.

Last May, the FDA approved two vonoprazan-based therapies for the treatment of H. pylori infection.

The study was funded by Phathom Pharmaceuticals. Dr. Laine and several coauthors have disclosed financial relationships with the company. Dr. Aggarwal reports no relevant financial relationships.

A version of this article first appeared on Medscape.com.


EHR alerts flag acute kidney injury and avert progression


Automated alerts sent to clinicians via patients’ electronic health records identified patients with diagnosable acute kidney injury (AKI) who were taking one or more medications that could potentially further worsen their renal function. This led to a significant increase in discontinuations of the problematic drugs and better clinical outcomes in a subgroup of patients in a new multicenter, randomized study with more than 5,000 participants.

“Automated alerts for AKI can increase the rate of cessation of potentially nephrotoxic medications without endangering patients,” said F. Perry Wilson, MD, at Kidney Week 2022, organized by the American Society of Nephrology.


In addition, the study provides “limited evidence that these alerts change clinical practice,” said Dr. Wilson, a nephrologist and director of the clinical and translational research accelerator at Yale School of Medicine in New Haven, Conn.

“It was encouraging to get providers to change their behavior” by quickly stopping treatment with potentially nephrotoxic medications in patients with incident AKI. But the results also confirmed that “patient decision-support systems tend to not be panaceas,” Dr. Wilson explained in an interview. Instead, “they tend to marginally improve” patients’ clinical status.

“Our hope is that widespread use may make some difference on a population scale, but rarely are these game changers,” he admitted.

“This was a very nice study showing how we can leverage the EHR to look not only at drugs but also contrast agents to direct educational efforts aimed at clinicians about when to discontinue” these treatments, commented Karen A. Griffin, MD, who was not involved with the study.
 

A danger for alert fatigue

But the results also showed that more research is needed to better refine this approach, added Dr. Griffin, a professor at Loyola University Chicago, Maywood, Ill., and chief of the renal section at the Edward Hines Jr. VA Medical Center in Hines, Ill. And she expressed caution about expanding the alerts that clinicians receive “because of the potential for alert fatigue.”

Dr. Wilson also acknowledged the danger for alert fatigue. “We’re doing these studies to try to reduce the number of alerts,” he said. “Most clinicians say that if we could show an alert improves patient outcomes, they would embrace it.”

Dr. Wilson and associates designed their current study to evaluate an enhanced type of alert that not only warned clinicians that a patient had developed AKI but also gave them an option to potentially intervene by stopping treatment with a medication that could possibly exacerbate worsening renal function. This enhancement followed their experience in a 2021 study that tested a purely informational alert that gave physicians no guidance about what actions to take to more quickly resolve the AKI.

These findings plus results from other studies suggested that “purely informational alerts may not be enough. They need to be linked” to suggested changes in patient management, Dr. Wilson explained.

Targeting NSAIDs, RAAS inhibitors, and PPIs

The new study used automated EHR analysis to not only identify patients with incident AKI, but also to flag medications these patients were receiving from any of three classes suspected of worsening renal function: nonsteroidal anti-inflammatory drugs (NSAIDs), renin-angiotensin-aldosterone system (RAAS) inhibitors (which include angiotensin-converting enzyme inhibitors and angiotensin receptor blockers), and proton-pump inhibitors (PPIs).
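
To make the trigger logic concrete, here is a minimal, hypothetical sketch of the kind of rule the study describes. It is a sketch under stated assumptions, not the trial’s actual implementation: the medication lists are abbreviated examples, and the AKI screen is a simplified creatinine-based check modeled on KDIGO stage 1 criteria.

# Hypothetical sketch of an EHR alert rule: incident AKI plus a targeted medication.
# Drug lists and thresholds are illustrative assumptions, not the study's code.

NEPHROTOXIC_CLASSES = {
    "NSAID": {"ibuprofen", "naproxen", "ketorolac"},
    "RAAS inhibitor": {"lisinopril", "losartan", "enalapril"},
    "PPI": {"omeprazole", "pantoprazole", "lansoprazole"},
}

def meets_stage1_aki(baseline_cr: float, current_cr: float) -> bool:
    """Simplified KDIGO stage 1 screen: serum creatinine rise of at least
    0.3 mg/dL, or at least 1.5 times baseline."""
    return current_cr - baseline_cr >= 0.3 or current_cr >= 1.5 * baseline_cr

def flagged_medications(active_meds: set) -> dict:
    """Return the targeted drug classes (and specific agents) the patient is on."""
    return {
        drug_class: active_meds & agents
        for drug_class, agents in NEPHROTOXIC_CLASSES.items()
        if active_meds & agents
    }

def build_alert(baseline_cr: float, current_cr: float, active_meds: set):
    """Fire an actionable alert only when both conditions hold, mirroring
    the trial's trigger: possible incident AKI plus a targeted medication."""
    if not meets_stage1_aki(baseline_cr, current_cr):
        return None
    hits = flagged_medications(active_meds)
    if not hits:
        return None
    lines = [f"Possible stage 1 AKI (creatinine {baseline_cr} -> {current_cr} mg/dL)."]
    for drug_class, agents in hits.items():
        lines.append(f"Consider holding {drug_class}: {', '.join(sorted(agents))}.")
    return "\n".join(lines)

# Example: creatinine rose 0.4 mg/dL in a patient on a RAAS inhibitor and a PPI.
print(build_alert(1.0, 1.4, {"lisinopril", "omeprazole", "acetaminophen"}))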

“Our hypothesis was that giving clinicians actionable advice could significantly improve patient outcomes,” Dr. Wilson said. “NSAIDs are frequently discontinued” in patients who develop AKI. “RAAS inhibitors are sometimes discontinued,” although the benefit from doing this remains unproven and controversial. “PPIs are rarely discontinued,” and may be an underappreciated contributor to AKI by causing interstitial nephritis in some patients.

The prospective study included 5,060 adults admitted with a diagnosis of stage 1 AKI at any of four Yale-affiliated teaching hospitals who were also taking agents from at least one of the three targeted drug classes at the time of admission. Clinicians caring for 2,532 of these patients received an alert about the AKI diagnosis and use of the questionable medications, while those caring for the 2,528 control patients received no alert and delivered usual care.

The study excluded patients with higher-risk profiles, including those with extremely elevated serum creatinine levels at admission (4.0 mg/dL or higher), those recently treated with dialysis, and patients with end-stage kidney disease.

The study had two primary outcomes. One measured the impact of the intervention on stopping the targeted drugs. The second assessed the clinical effect of the intervention on progression of AKI to a higher stage, need for dialysis, or death during either the duration of hospitalization or during the first 14 days following randomization.

Overall, a 9% relative increase in discontinuations

In general, the intervention had a modest but significant effect on cessation of the targeted drug classes within 24 hours of sending the alert.

Overall, there was about a 58% discontinuation rate among controls and about a 62% discontinuation rate among patients managed using the alerts, a significant 9% relative increase in any drug discontinuation, Dr. Wilson reported.

Discontinuations of NSAIDs occurred at the highest rate, in about 80% of patients in both groups, and the intervention showed no significant effect on stopping agents from this class. Discontinuations of RAAS inhibitors showed the largest absolute difference in between-group effect, about a 10-percentage-point increase that represented a significant 14% relative increase in stopping agents from this class. Discontinuations of PPIs occurred at the lowest rate, in roughly 20% of patients, but the alert intervention had the greatest impact by raising the relative rate of stopping by a significant 26% compared with controls.
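
As an arithmetic note on how the absolute and relative figures relate (back-calculated from the article’s rounded numbers, so the control-arm rates below are implied rather than stated):

$$\text{relative increase} = \frac{p_{\text{alert}} - p_{\text{control}}}{p_{\text{control}}}$$

For RAAS inhibitors, the roughly 10-percentage-point absolute gain on an implied control rate near 71% (10/0.14 ≈ 71) yields the reported 14% relative increase; for PPIs, the low baseline of roughly 20% is why a modest absolute change of about 5 points translates into the largest relative effect, 26%.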

Analysis of the effect of the intervention on the combined clinical outcome showed a less robust impact. The alerts produced no significant change in the clinical outcome overall, or in the use of NSAIDs or RAAS inhibitors. However, the change in use of PPIs following the alerts was significantly linked with a 12% relative drop in the incidence of the combined clinical endpoint of progression of AKI to a higher stage, need for dialysis, or death.

The results were consistent across several prespecified subgroups based on parameters such as age, sex, and race, but these analyses showed a signal that the alerts were most helpful for patients who had serum creatinine levels at admission of less than 0.5 mg/dL.

Dr. Wilson speculated that the alerts might have been especially effective for these patients because their low creatinine levels might otherwise mask AKI onset.

A safety analysis showed no evidence that the alert interventions and drug cessations increased the incidence of any complications.

PPIs may distinguish ‘sicker’ patients

Dr. Wilson cited two potential explanations for why the tested alerts appeared most effective for patients taking a PPI at the time of admission. One is that PPIs are underappreciated as a contributor to AKI, a possibility supported by the low rates of discontinuation in both the control and intervention groups.

In addition, treatment with a PPI may be a marker of “sicker” patients who may have more to gain from quicker identification of their AKI. For example, 28% of the patients who were taking a PPI at admission were in the ICU when they entered the study compared with a 14% rate of ICU care among everyone else.

PPIs were also the most-used targeted drug class among enrolled patients, used by 65% at baseline, compared with 53% who were taking a RAAS inhibitor and about 31% who were taking an NSAID. About 6% of enrolled patients were taking agents from all three classes at baseline, and 36% were on treatment with agents from two of the classes.

The next step is to assess adding more refinement to the alert process, Dr. Wilson said. He and his associates are now running a study in which an AKI alert goes to a “kidney action team” that includes a trained clinician and a pharmacist. The team would review the patient who triggered the alert and quickly make an individualized assessment of the best intervention for resolving the AKI.

The study received no commercial funding. Dr. Wilson has received research funding from AstraZeneca, Boehringer Ingelheim, Vifor, and Whoop. Dr. Griffin has reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Denosumab may halt erosive hand OA progression


But pain outcomes questionable

A double dose of the antiosteoporosis biologic denosumab (Prolia) slowed progression and repaired joints in erosive hand osteoarthritis (OA) but showed no impact on pain levels until 2 years after patients received the first dose, the lead investigator of a Belgium-based randomized clinical trial reported at the annual meeting of the American College of Rheumatology.

“This is the first placebo-controlled, randomized clinical trial showing the efficacy of denosumab double-dosing regimen in structural modification of erosive hand osteoarthritis,” Ruth Wittoek, MD, PhD, a rheumatologist at Ghent (Belgium) University, said in presenting the results.

“Our primary endpoint was confirmed by a more robust secondary endpoint, both showing that denosumab stopped erosive progression and induced remodeling in patients with erosive hand OA,” she added. “Moreover, the double-dosing regimen was well-tolerated.”

However, during the question-and-answer period after her presentation, Dr. Wittoek acknowledged the study didn’t evaluate the impact denosumab had on cartilage and didn’t detect a signal for pain resolution until 96 weeks during the open-label extension phase. “I’m not quite sure if denosumab is sufficient to treat symptoms in osteoarthritis,” she said. “There were positive signals but, of course, having to wait 2 years for an effect is kind of hard for our patients.”

The trial randomized 100 adult patients 1:1 to denosumab 60 mg every 12 weeks – double the normal dose for osteoporosis – or placebo. The primary endpoint was changes in erosive progression and signs of repair based on x-ray at 48 weeks, after which all patients were switched to denosumab for the open-label study. To quantify changes, the investigators used the Ghent University Scoring System (GUSS), which uses a scale of 0-300 to quantify radiographic changes in erosive hand OA.

Dr. Wittoek said that the average change in GUSS at week 24 was +6 in the denosumab group vs. –2.8 in the placebo group (P = .024), a gap that widened at week 48 to +10.1 vs. –7.9 (P = .003). By week 96, the change was +18.8 for the original denosumab group and +17 for the placebo group after its switch to denosumab (P = .03).

“During the open-label extension the denosumab treatment group continued to increase to show remodeling while the former placebo treatment group, now also receiving denosumab, also showed signs of remodeling,” she said. “So, there was no more erosive progression.”

The secondary endpoint was the percentage of new erosive joint development at week 48: 1.8% in the denosumab group and 7% in the placebo group (odds ratio, 0.23; 95% confidence interval, 0.10-0.50; P < .001). “Meaning the odds of erosive progression is 77% lower in the denosumab treatment group,” Dr. Wittoek said.
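
The “77% lower” phrasing follows directly from the odds ratio; as an arithmetic check with the rounded incidence figures:

$$\mathrm{OR} = \frac{0.018/0.982}{0.070/0.930} \approx 0.24,$$

which agrees with the reported 0.23 (computed from exact counts) up to rounding, and 1 − 0.23 = 0.77, hence 77% lower odds of new erosive joints with denosumab.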

By week 96, those percentages were 0% and 0.7% in the respective treatment groups. “During the open-label extension, it was clear that denosumab blocked all new development of erosive joints,” she said.

Pain was one of the study’s exploratory endpoints, and mean numeric rating scale (NRS) pain scores showed no difference between treatment arms until the 96-week results, when the denosumab group had fallen by almost half (from 4.2 at week 48 to 2.4) and the placebo-switched-to-denosumab arm by less (from 4.2 to 3.5), with P = .028 for the between-arm comparison.
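
The “almost half” can be checked directly from the reported means; a quick sketch:

    # Percent reduction in mean NRS pain score in the denosumab arm,
    # computed from the values reported above.
    week48, week96 = 4.2, 2.4
    reduction = (week48 - week96) / week48 * 100  # ~42.9%, i.e., almost half
    print(f"Pain score reduction: {reduction:.1f}%")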

The placebo group was more susceptible to adverse events, namely musculoskeletal complaints and nervous system disorders, Dr. Wittoek noted. Infection rates, the most common adverse event, were similar between the two groups: 41 and 39 in the respective arms. Despite the double dose of denosumab, safety and tolerability in this trial were comparable to those in other trials, she said.

In comments submitted by e-mail, Dr. Wittoek noted that the extension study results will go out to 144 weeks. She also addressed the issues surrounding pain as an outcome.

“Besides disability, pain is also important from the patient’s perspective,” Dr. Wittoek said in the e-mailed comments. “However, pain and radiographic progression are undeniably coupled, but it’s unclear how.”

In erosive hand OA, structural progression and pain may not be related on a molecular level, she said. “Therefore, we don’t deny that pain levels should also be covered by treatment, but they should not be confused with structural modification; it is just another domain, not more nor less important.”

The second year of the open-label extension study should clarify the pain outcomes, she said.

In an interview, David T. Felson, MD, MPH, professor and director of clinical epidemiology research at Boston University, questioned the delayed pain effect the study suggested. “It didn’t make any sense to me that there would be because both groups at that point got denosumab, so if there was going to be a pain effect that would’ve happened,” he said.

The pain effect is “really important,” he said. “We don’t use denosumab in rheumatoid arthritis to treat erosions because it doesn’t necessarily affect the pain and dysfunction of rheumatoid arthritis, and I’m not sure that isn’t going to be true in erosive hand osteoarthritis, but it’s possible.”

To clarify the pain outcomes, he said, “They’re going to have to work on the data.”

Amgen sponsored the trial but had no role in the design. Dr. Wittoek and Dr. Felson reported no relevant disclosures.
 
