Nordic walking bests other workouts on functional outcome in CVD

Nordic walking was significantly better at improving functional capacity than were moderate- to vigorous-intensity continuous training and high-intensity interval training (HIIT) in a single-center randomized controlled trial.

Participants who did Nordic walking saw greater improvements in functional capacity, measured as 6-minute walk test distance, than did individuals doing either of the other exercise strategies (interaction effect, P = .010).

From baseline to 26 weeks, the average changes in 6-minute walk test distance were 55.6 m and 59.9 m for moderate- to vigorous-intensity continuous training and HIIT, respectively, but 94.2 m in the Nordic walking group, reported Tasuku Terada, PhD, University of Ottawa Heart Institute, Ontario, and colleagues.

Previous research assessed these outcomes at the end of the 12-week supervised exercise intervention and showed that, although all three strategies were safe and had positive effects on physical and mental health in these patients, Nordic walking raised 6-minute walk test scores more than moderate- to vigorous-intensity continuous training and HIIT did, the researchers noted.

“This study is a follow-up on the previous study to show that Nordic walking had greater sustained effects even after the observation phase,” from 12 to 26 weeks, Dr. Terada said in an interview.

“Exercise is a medicine to improve the health of patients, but unfortunately, sometimes it is not as often utilized,” Dr. Terada told this news organization.

Giving patients additional exercise modalities is beneficial because not everyone likes HIIT workouts or long continuous walking, Dr. Terada said. “So, if that’s the case, we can recommend Nordic walking as another type of exercise and expect a similar or good impact in functional capacity.”

The results were published online in the Canadian Journal of Cardiology.

“I think it honestly supports the idea that, as many other studies show, physical activity and exercise improve functional capacity no matter how you measure it and have beneficial effects on mental health and quality of life and particularly depression as well,” Carl “Chip” Lavie, MD, Ochsner Clinical School, University of Queensland School of Medicine, New Orleans, who coauthored an editorial accompanying the publication, said in an interview.

“Clinicians need to get patients to do the type of exercise that they are going to do. A lot of people ask what’s the best exercise, and the best exercise is one that the person is going to do,” Dr. Lavie said.

Nordic walking is an enhanced form of walking that engages both the upper- and lower-body musculature, Dr. Lavie noted.

“With regard to Nordic walking, I think that now adds an additional option that many people wouldn’t have thought about. For many of the patients that have issues that are musculoskeletal, issues with posture, gait, or balance, using the poles can be a way to allow them to walk much better and increase their speed, and as they do that, they become fitter,” Dr. Lavie continued.

Moreover, these findings support the use of Nordic walking in cardiac rehabilitation programs, the editorialists noted.

Cardiac rehabilitation

The study examined patients with coronary artery disease who underwent cardiac revascularization. They were then referred by their physicians to cardiac rehabilitation.

Participants were randomly assigned to one of the following intervention groups: Nordic walking (n = 30), moderate- to vigorous-intensity continuous training (n = 27), and HIIT (n = 29) for a 12-week period. There was then an additional 14-week observation period after the exercise program. Mean age was 60 years across the intervention groups.

The research team assessed participants’ depression with the Beck Depression Inventory–II, quality of life with the Short Form–36 and HeartQoL questionnaires, and functional capacity with the 6-minute walk test, at baseline, 12 weeks, and 26 weeks.

Using linear mixed models with repeated measures, the study authors evaluated sustained effects (from week 12 to week 26) and prolonged effects (from baseline to week 26).

From baseline to 26 weeks, participants saw significant improvements in quality of life, depression symptoms, and 6-minute walk test distance (P < .05).

Physical quality of life and 6-minute walk test distance rose significantly between weeks 12 and 26 (P < .05).

Notably, at week 26, all training groups achieved the minimal clinically important difference of 54 m, although participants in the Nordic walking group demonstrated significantly greater improvement.

Other data indicated the following:

  • From baseline to week 12, physical activity levels rose significantly, and this improvement was sustained through the observation period.
  • During the observation period, mental component summary scores declined significantly, while physical component summary scores improved.
  • After completion of cardiac rehabilitation, functional capacity continued to increase significantly.
  • Moderate- to vigorous-intensity continuous training, HIIT, and Nordic walking had positive and significant prolonged effects on depression symptoms and general and disease-specific quality of life, with no differences in the extent of improvements between exercise types.

The researchers noted several limitations: women made up a small proportion of the study group, limiting the generalizability of the data; the cohort was recruited from a single medical facility; and follow-up was short.

“Further research is warranted to investigate the efficacy and integration of Nordic walking into home-based exercise after supervised cardiac rehabilitation for maintenance of physical and mental health,” the editorialists concluded.

Dr. Terada, Dr. Lavie, and Dr. Taylor reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.



Early cardiac rehab as effective as later start after sternotomy

Cardiac rehabilitation (CR) started 2 weeks after sternotomy for a cardiac procedure was noninferior to usual care, in which CR starts 6 weeks after the procedure, and produced a greater improvement in 6-minute walk test distance, a randomized study suggests.

There was no difference in adverse events between groups, although the researchers pointed out that the study was not powered specifically for safety outcomes.

“Cardiac surgical techniques have evolved significantly over the last 60 years, leading to improved survival and shorter hospital stays,” Gordon McGregor, PhD, University of Warwick, Coventry, England, told this news organization. “However, sternal precautions and rehabilitation guidelines have not changed accordingly. There has never been a guideline based on empirical evidence to support rehabilitation professionals working with cardiac surgery patients after median sternotomy.”

“By adopting a progressive individualized approach,” he added, “cardiac surgery sternotomy patients can start cardiac rehabilitation up to 4 weeks earlier than current guidance, and thus potentially complete their recovery sooner.”

Results of the Early Initiation of Poststernotomy Cardiac Rehabilitation Exercise Training study were published online in JAMA Cardiology.

In the study, Dr. McGregor and colleagues randomly assigned 158 patients (mean age, 63 years; 84% men) to 8 weeks of 1-hour, twice-weekly supervised CR exercise training starting 2 weeks (early) or 6 weeks (usual care) after sternotomy.

The primary outcome was change in the 6-minute walk test distance from baseline to 10 or 14 weeks after sternotomy, respectively, and 12 months after randomization.

For usual care, training followed British standards: a warm-up with light cardiovascular and mobility exercises; continuous moderate-intensity cardiovascular exercise; a cooldown; functional exercises using resistance machines and free weights; and upper-body exercises designed to prevent sternal and leg wound pain and complications.

There are no specific outpatient CR exercise guidelines for early CR, so study participants followed an individualized exercise program for the first 2-3 weeks after surgery, starting with light mobility and moderate-intensity cardiovascular training when they could do those exercises with minimal discomfort. They then progressed to current British standards, as per usual care.

Forty patients were lost to follow-up, largely because of the pandemic; about half the participants in each group were included in the primary analysis.

Early CR was not inferior to usual care, the authors wrote. The mean change in 6-minute walk distance from baseline to completion of CR was 28 m greater in the early group than in the usual-care group and was achieved 4 weeks earlier in the recovery timeline.

Secondary outcomes (functional fitness and quality of life) improved in both groups and between-group differences were not statistically significant, indicating the noninferiority of early CR, the authors noted.

Safety not proven

There were more adverse events in the early group than in the usual-care group (58 vs. 46) and more serious adverse events (18 vs. 14), but fewer deaths (1 vs. 2).

Although there was no between-group difference in the likelihood of having an adverse or serious adverse event, Dr. McGregor acknowledged that the study was “not powered specifically for safety outcomes.” He added that “there is the potential to run a very large multination definitive superiority [randomized, controlled trial] with safety as the primary outcome; however, a very large sample would be required.”

Meanwhile, he said, “we can say with some degree of certainty that early CR was likely as safe as usual-care CR. In the United Kingdom, we work closely with the British Association for Cardiovascular Prevention and Rehabilitation and the Association of Chartered Physiotherapists in Cardiovascular Rehabilitation, who will incorporate our findings in their guidelines and training courses.”

Questions remain

Asked to comment on the study, John Larry, MD, medical director of cardiology and cardiac rehabilitation at the Ohio State University Wexner Medical Center East Hospital, Columbus, said: “For those under time pressure to return to work, [early CR] could be an advantage to allow more rehab time and improved stamina prior to their return-to-work date.”

That said, he noted, “we typically delay any significant upper-body training activities for 8-10 weeks to avoid impact on healing of the sternum. Thus ... starting sooner would limit the amount of time a patient would have to engage in any upper-body resistance training. Many lose upper body strength after surgery, so this is an important part of the recovery/rehab process.”

Matthew Tomey, MD, director of the cardiac intensive care unit, Mount Sinai Morningside, New York, advised “caution” when interpreting the findings, stating that “there was no evident difference in the primary outcome measure of functional capacity by 14 weeks, and the trial was not designed to directly assess impact on either social functioning or economic productivity.”

“I would be interested to [see] more comprehensive data on safety in a larger, more diverse sample of postoperative patients,” he said, “as well as evidence to indicate clear advantage of an earlier start for patient-centered outcomes specifically after cardiac surgery.

“Perhaps the greatest challenges to full realization of the benefits of CR in practice have been gaps in referral and gaps in enrollment,” he added. “It is incumbent upon us as clinicians to counsel our patients and to provide appropriate referrals.”

The study was supported by the Medical and Life Sciences Research Fund and the Jeremy Pilcher Memorial Fund. No conflicts of interest were reported.

A version of this article first appeared on Medscape.com.


Cardiac rehabilitation (CR) started 2 weeks after sternotomy for a cardiac procedure was noninferior to usual care, in which CR starts 6 weeks after the procedure, with a greater improvement in 6-minute walk test outcomes, a randomized study suggests.

There was no difference in adverse events between groups, although the researchers pointed out that the study was not powered specifically for safety outcomes.

“Cardiac surgical techniques have evolved significantly over the last 60 years, leading to improved survival and shorter hospital stays,” Gordon McGregor, PhD, University of Warwick, Coventry, England, told this news organization. “However, sternal precautions and rehabilitation guidelines have not changed accordingly. There has never been a guideline based on empirical evidence to support rehabilitation professionals working with cardiac surgery patients after median sternotomy.”

“By adopting a progressive individualized approach,” he added, “cardiac surgery sternotomy patients can start cardiac rehabilitation up to 4 weeks earlier than current guidance, and thus potentially complete their recovery sooner.”

Results of the Early Initiation of Poststernotomy Cardiac Rehabilitation Exercise Training study were published online  in JAMA Cardiology.

In the study, Dr. McGregor and colleagues randomly assigned 158 patients (mean age, 63 years; 84% men) to 8 weeks of 1-hour, twice-weekly supervised CR exercise training starting 2 weeks (early) or 6 weeks (usual care) after sternotomy.

The primary outcome was change in the 6-minute walk test distance from baseline to completion of CR (10 weeks after sternotomy in the early group and 14 weeks in the usual-care group) and at 12 months after randomization.

For usual care, training followed British standards: a warm-up with light cardiovascular and mobility exercises; continuous moderate-intensity cardiovascular exercise; a cooldown; functional exercises using resistance machines and free weights; and upper-body exercises designed to avoid sternal and leg wound pain and complications.

There are no specific outpatient CR exercise guidelines for early CR, so study participants followed an individualized exercise program for the first 2-3 weeks after surgery, starting with light mobility and moderate-intensity cardiovascular training when they could do those exercises with minimal discomfort. They then progressed to current British standards, as per usual care.

Forty patients were lost to follow-up, largely because of the pandemic; about half the participants in each group were included in the primary analysis.

Early CR was not inferior to usual care, the authors wrote. The mean change in 6-minute walk distance from baseline to completion of CR was 28 meters greater in the early group than in the usual-care group, and was achieved 4 weeks earlier in the recovery timeline.
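A noninferiority conclusion of this kind is typically judged against a prespecified margin: the new strategy is declared noninferior if the lower bound of the confidence interval for the between-group difference does not cross that margin. A minimal sketch of the decision rule, with an assumed margin and assumed interval bound (the trial's actual values are not reported here):

```python
# Illustrative noninferiority check; the margin and confidence-interval
# values below are assumptions for demonstration, not trial values.
def is_noninferior(ci_lower: float, margin: float) -> bool:
    """Early CR is noninferior if the lower bound of the CI for the
    (early minus usual care) difference stays above -margin."""
    return ci_lower > -margin

# Assumed example: mean difference +28 m with an assumed 95% CI lower
# bound of -5 m, against an assumed noninferiority margin of 35 m.
print(is_noninferior(ci_lower=-5.0, margin=35.0))  # True
```

The same rule shows why a "not inferior" finding can coexist with a numerically larger improvement: the test only requires the interval to exclude a clinically important deficit.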

Secondary outcomes (functional fitness and quality of life) improved in both groups, and between-group differences were not statistically significant, consistent with the noninferiority of early CR, the authors noted.

Safety not proven

There were more adverse events in the early group than in the usual-care group (58 vs. 46) and more serious adverse events (18 vs. 14), but fewer deaths (1 vs. 2).

Although there was no between-group difference in the likelihood of having an adverse or serious adverse event, Dr. McGregor acknowledged that the study was “not powered specifically for safety outcomes.” He added that “there is the potential to run a very large multination definitive superiority [randomized, controlled trial] with safety as the primary outcome; however, a very large sample would be required.”

Meanwhile, he said, “we can say with some degree of certainty that early CR was likely as safe as usual-care CR. In the United Kingdom, we work closely with the British Association for Cardiovascular Prevention and Rehabilitation and the Association of Chartered Physiotherapists in Cardiovascular Rehabilitation, who will incorporate our findings in their guidelines and training courses.”

Questions remain

Asked to comment on the study, John Larry, MD, medical director of cardiology and cardiac rehabilitation at the Ohio State University Wexner Medical Center East Hospital, Columbus, said: “For those under time pressure to return to work, [early CR] could be an advantage to allow more rehab time and improved stamina prior to their return-to-work date.”

That said, he noted, “we typically delay any significant upper-body training activities for 8-10 weeks to avoid impact on healing of the sternum. Thus ... starting sooner would limit the amount of time a patient would have to engage in any upper-body resistance training. Many lose upper body strength after surgery, so this is an important part of the recovery/rehab process.”

Matthew Tomey, MD, director of the cardiac intensive care unit, Mount Sinai Morningside, New York, advised “caution” when interpreting the findings, stating that “there was no evident difference in the primary outcome measure of functional capacity by 14 weeks, and the trial was not designed to directly assess impact on either social functioning or economic productivity.”

“I would be interested to [see] more comprehensive data on safety in a larger, more diverse sample of postoperative patients,” he said, “as well as evidence to indicate clear advantage of an earlier start for patient-centered outcomes specifically after cardiac surgery.

“Perhaps the greatest challenges to full realization of the benefits of CR in practice have been gaps in referral and gaps in enrollment,” he added. “It is incumbent upon us as clinicians to counsel our patients and to provide appropriate referrals.”

The study was supported by the Medical and Life Sciences Research Fund and the Jeremy Pilcher Memorial Fund. No conflicts of interest were reported.

A version of this article first appeared on Medscape.com.

FROM JAMA CARDIOLOGY


Persistent abdominal pain: Not always IBS

Persistent abdominal pain may be caused by a whole range of different conditions, say French experts who call for more physician awareness to achieve early diagnosis and treatment so as to improve patient outcomes.

Benoit Coffin, MD, PhD, and Henri Duboc, MD, PhD, from Hôpital Louis Mourier, Colombes, France, conducted a literature review to identify rare and less well-known causes of persistent abdominal pain, identifying almost 50 across several categories.

“Some causes of persistent abdominal pain can be effectively treated using established approaches after a definitive diagnosis has been reached,” they wrote.

“Other causes are more complex and may benefit from a multidisciplinary approach involving gastroenterologists, pain specialists, allergists, immunologists, rheumatologists, psychologists, physiotherapists, dietitians, and primary care clinicians,” they wrote.

The research was published online in Alimentary Pharmacology and Therapeutics.

Frequent and frustrating symptoms

Although there is “no commonly accepted definition” for persistent abdominal pain, the authors said it may be defined as “continuous or intermittent abdominal discomfort that persists for at least 6 months and fails to respond to conventional therapeutic approaches.”

They highlight that it is “frequently encountered” by physicians and has a prevalence of 22.9 per 1,000 person-years, regardless of age group, ethnicity, or geographical region, with many patients experiencing pain for more than 5 years.

The cause of persistent abdominal pain can be organic with a clear cause or functional, making diagnosis and management “challenging and frustrating for patients and physicians.”

“Clinicians not only need to recognize somatic abnormalities, but they must also perceive the patient’s cognitions and emotions related to the pain,” they added, suggesting that clinicians take time to “listen to the patient and perceive psychological factors.”

Dr. Coffin and Dr. Duboc write that the most common conditions associated with persistent abdominal pain are irritable bowel syndrome and functional dyspepsia, as well as inflammatory bowel disease, chronic pancreatitis, and gallstones.

To examine the diagnosis and management of its less well-known causes, the authors conducted a literature review, beginning with the diagnosis of persistent abdominal pain.

Diagnostic workup

“Given its chronicity, many patients will have already undergone extensive and redundant medical testing,” they wrote, emphasizing that clinicians should be on the lookout for any change in the description of persistent abdominal pain or new symptoms.

“Other ‘red-flag’ symptoms include fever, vomiting, diarrhea, acute change in bowel habit, obstipation, syncope, tachycardia, hypotension, concomitant chest or back pain, unintentional weight loss, night sweats, and acute gastrointestinal bleeding,” the authors said.

They stressed the need to determine whether the origin of the pain is organic or functional, as well as the importance of identifying a “triggering event, such as an adverse life event, infection, initiating a new medication, or surgical procedure.” They also recommend discussing the patient’s diet.

There are currently no specific algorithms for diagnostic workup of persistent abdominal pain, the authors said. Patients will have undergone repeated laboratory tests, “upper and lower endoscopic examinations, abdominal ultrasounds, and computed tomography scans of the abdominal/pelvic area.”

Consequently, “in the absence of alarm features, any additional tests should be ordered in a conservative and cost-effective manner,” they advised.

They suggested that, at a tertiary center, patients should be assessed in three steps:

  • In-depth questioning of the symptoms and medical history
  • Summary of all previous investigations and treatments and their effectiveness
  • Determination of the complementary explorations to be performed

The authors went on to list 49 rare or less well-known potential causes of persistent abdominal pain, some linked to digestive disorders, such as eosinophilic gastroenteritis, mesenteric panniculitis, and chronic mesenteric ischemia, as well as endometriosis, chronic abdominal wall pain, and referred osteoarticular pain.

Systemic causes of persistent abdominal pain may include adrenal insufficiency and mast cell activation syndrome, while acute hepatic porphyrias and Ehlers-Danlos syndrome may be genetic causes.

There are also centrally mediated disorders that lead to persistent abdominal pain, the authors noted, including postural orthostatic tachycardia syndrome and narcotic bowel syndrome caused by opioid therapy, among others.

Writing support for the manuscript was funded by Alnylam Switzerland. Dr. Coffin has served as a speaker for Kyowa Kirin and Mayoly Spindler and as an advisory board member for Sanofi and Alnylam. Dr. Duboc reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

FROM ALIMENTARY PHARMACOLOGY AND THERAPEUTICS


Impact of eliminating cost-sharing on follow-up colonoscopy mixed

Oregon and Kentucky recently enacted policies to eliminate financial disincentives that may have deterred people from undergoing a follow-up colonoscopy after a positive result on a noninvasive screening test for colorectal cancer (CRC).

A new analysis shows that the impact has been mixed. The policies led to significantly increased overall CRC screening and use of noninvasive testing in Oregon but not Kentucky.

The study was published online in JAMA Network Open.

The Affordable Care Act mandates that several CRC screening tests be covered without cost-sharing for people at average risk for CRC. However, lingering cost barriers remain for some people who have a positive initial screening test result and who need follow-up colonoscopy.

This led Kentucky in 2016 and Oregon in 2017 to enact policies that eliminate cost-sharing. Earlier this year, federal guidance eliminated cost-sharing for colonoscopies following noninvasive CRC screening tests for commercial insurers, and a similar policy is under consideration for Medicare.

For their study, Douglas Barthold, PhD, of the University of Washington, Seattle, and colleagues used claims data to evaluate CRC screening rates in Oregon and Kentucky, compared with rates in neighboring states that do not have cost-sharing policies.

The sample included more than 1.2 million individuals aged 45-64 living in Oregon, Kentucky, and nearby states from 2012 to 2019. Overall, about 15% of the cohort underwent any CRC screening; 8% underwent colonoscopy.

After the Oregon policy that eliminated cost-sharing went into effect, Oregonians had 6% higher odds of receiving any CRC screening (odds ratio [OR], 1.06; 95% confidence interval [CI], 1.00-1.06; P = .03) and 35% higher odds of undergoing an initial noninvasive test (OR, 0.65; 95% CI, 0.58-0.73; P < .001), compared with neighboring states that did not implement a similar policy.
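As a quick reference for the odds ratios quoted above, the “percent change in odds” phrasing follows directly from the OR itself. A minimal illustration (the helper function is mine, not from the study):

```python
# Illustrative only: convert an odds ratio to the "percent change in
# odds" phrasing used in the article (e.g., OR 1.06 -> 6% higher odds).
def or_to_percent_change(odds_ratio: float) -> float:
    """Percent change in odds implied by an odds ratio."""
    return (odds_ratio - 1.0) * 100.0

print(round(or_to_percent_change(1.06)))  # 6   (6% higher odds)
print(round(or_to_percent_change(0.65)))  # -35 (35% lower odds)
```

Note that an OR above 1 corresponds to higher odds and an OR below 1 to lower odds, so the direction of the quoted percentage should always match the direction of the OR.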

But there were no significant differences in total CRC screening use in Kentucky after policy implementation compared with neighboring states.

The odds of receiving a colonoscopy conditional on undergoing noninvasive CRC screening were not statistically different in Oregon or Kentucky, compared with neighboring states.

“These findings suggest that the enactment of policies that remove financial barriers is merely one of many elements (e.g., health literacy, outreach, transportation, access to care) that may help to achieve desired cancer screening outcomes,” wrote Dr. Barthold and colleagues.

The study had no commercial funding. Dr. Barthold reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

FROM JAMA NETWORK OPEN


Lawmakers argue for changes in prior authorization processes

Republican and Democratic members of the House called for changes in how insurer-run Medicare plans manage the prior authorization process, following testimony from a federal watchdog organization about improper denials of payment for care.

About 18% of payment denials in a sample examined by the Office of Inspector General (OIG) of the Department of Health and Human Services met either Medicare coverage rules or the insurance plan’s own rules and therefore should not have been denied, according to an April OIG report based on a sample of 2019 denials from large insurer-run Medicare plans.


Republican and Democratic members of the House called for changes in how insurer-run Medicare plans manage the prior authorization process, following testimony from a federal watchdog organization about improper denials of payment for care.

About 18% of payment denials in a sample examined by the Office of Inspector General (OIG) of the Department of Health and Human Services (HHS) either met Medicare coverage rules or the rules of the insurance plan and therefore should not have been denied. That was the finding of an April OIG report based on a sample of 2019 denials from large insurer-run Medicare plans.

Erin Bliss, an assistant inspector general with the OIG, appeared as a witness at a June 28 Energy and Commerce Subcommittee on Oversight and Investigations hearing to discuss this investigation and other issues with prior authorization and insurer-run Medicare, also known as the Advantage plans.

Most of these payment denials of appropriate services were due to human error during manual claims-processing reviews, such as overlooking a document, or to system processing errors, such as a Medicare insurance plan failing to program or update a system correctly, Ms. Bliss told the subcommittee.

In many cases, these denials were reversed, but patient care was still disrupted and clinicians lost time chasing clearances for services that plans already had covered, Ms. Bliss said in her testimony.

The April report was not the OIG’s first look into concerns about insurer-run plans inappropriately denying care through prior authorization. In 2018, the OIG reported that insurer-run Medicare plans overturned 75% of their own denials during 2014-2016 when patients and clinicians appealed these decisions, approximately 216,000 reversals each year.

‘Numerous hoops’ unnecessary for doctors, patients

Lawmakers at the hearing supported prior authorization in principle as a screening tool to prevent unneeded care.

But they chided insurance companies for their execution of the process, which often frustrates clinicians and patients with the complex steps required. Medicare Advantage plans sometimes require prior authorization for “relatively standard medical services,” said Subcommittee on Oversight and Investigations Chair Diana DeGette (D-Colo.).

“Our seniors and their doctors should not be required to jump through numerous hoops to ensure coverage for straightforward and medically necessary procedures,” Rep. DeGette said.

Several lawmakers spoke at the hearing about the need for changes to prior authorization, including calling for action on a pending bill intended to compel insurers to streamline the review process. The Improving Seniors’ Timely Access to Care Act of 2021 already has attracted more than 300 bipartisan sponsors. A companion Senate bill has more than 30 sponsors.

The bill’s aim is to shift this process away from faxes and phone calls while also encouraging plans to adhere to evidence-based medical guidelines in consultation with physicians. The bill calls for the establishment of an electronic prior authorization program that could issue real-time decisions.

“The result will be less administrative burden for providers and more information in the hands of patients. It will allow more patients to receive care when they need it, reducing the likelihood of additional, often more severe complications,” said Rep. Larry Bucshon, MD (R-Ind.), who is among the active sponsors of the bill.

“In the long term, I believe it would also result in cost savings for the health care system at large by identifying problems earlier and getting them treated before their patients have more complications,” Rep. Bucshon added.

Finding ‘room for improvement’ for prior authorizations

There’s strong bipartisan support in Congress for insurer-run Medicare, which has grown by 10% per year over the last several years and has doubled since 2010, according to the Medicare Payment Advisory Commission (MedPAC). About 27 million people are now enrolled in these plans.

But for that reason, insurer-run Medicare may also need more careful watching, lawmakers made clear at the hearing.

“We’ve heard quite a bit of evidence today that there is room for improvement,” said Rep. Bucshon, a strong supporter of insurer-run Medicare, which can offer patients added benefits such as dental coverage.

Rep. Ann Kuster (D-N.H.) said simplifying prior authorization would reduce stress on clinicians already dealing with burnout.

“They’re just so tired of all this paperwork and red tape,” Rep. Kuster said. “In 2022 can’t we at least consider electronic prior authorization?”

At the hearing, Rep. Michael C. Burgess, MD (R-Tex.), noted that his home state already has taken a step toward reducing the burden of prior authorization with its “gold card” program.

In 2021, a new Texas law directed the state department of insurance to develop rules requiring health plans to exempt a physician or provider from preauthorization requirements for a particular health care service if the issuer has approved, or would have approved, at least 90% of that clinician’s preauthorization requests for the service. The law also mandates that a physician participating in a peer-to-peer review on behalf of a health benefit plan issuer be licensed in Texas and practice in the same or a similar specialty as the physician or clinician requesting the service, according to the state insurance department.
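The exemption described above reduces to a threshold check on a clinician’s historical approval rate for a given service. A minimal sketch of that check, assuming a simple count of approved versus total requests (the 90% threshold is from the law as described; the function name, inputs, and counting rules are illustrative, not statutory):

```python
def qualifies_for_gold_card(approved: int, total: int, threshold: float = 0.90) -> bool:
    """True if a clinician's preauthorization approval rate for one service
    meets the exemption threshold (90% under the Texas law as described).
    How requests are counted, and over what period, are illustrative choices."""
    if total == 0:
        return False  # no request history, no exemption
    return approved / total >= threshold

print(qualifies_for_gold_card(18, 20))   # 18/20 = 90% -> True
print(qualifies_for_gold_card(17, 20))   # 17/20 = 85% -> False
```

In practice, a real rule would also have to define the look-back window and what counts as an approval, which is exactly what the law leaves to the state insurance department’s rulemaking.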

Separately, Rep. Suzan DelBene (D-Wash.), the sponsor of the Improving Seniors’ Timely Access to Care Act, told the American Medical Association in a recent interview that she expects the House Ways and Means Committee, on which she serves, to mark up her bill in July. (A mark-up is the process by which a House or Senate committee considers and often amends a bill and then sends it to the chamber’s leadership for a floor vote.)

In a statement issued about the hearing, America’s Health Insurance Plans (AHIP) noted that there has been work in recent years toward streamlining prior authorization. AHIP said it launched the Fast Prior Authorization Technology Highway (Fast PATH) initiative in 2020 to study electronic procedures for handling these reviews.

“The findings of this study showed that ePA delivered improvements with a strong majority of experienced providers reporting faster time to patient care, fewer phone calls and faxes, better understanding of [prior authorization] requirements, and faster time to decisions,” AHIP said.

A version of this article first appeared on Medscape.com.



More reflux after sleeve gastrectomy vs. gastric bypass at 10 years


Sleeve gastrectomy (SG) and Roux-en-Y gastric bypass (RYGB) each led to good and sustainable weight loss 10 years later, although reflux was more prevalent after SG, according to the Sleeve vs. Bypass (SLEEVEPASS) randomized clinical trial.

At 10 years, there were no statistically significant between-procedure differences in type 2 diabetes remission, dyslipidemia, or obstructive sleep apnea, but hypertension remission was greater with RYGB.

However, importantly, the cumulative incidence of Barrett’s esophagus was similar after both procedures (4%) and markedly lower than reported in previous trials (14%-17%).

To their knowledge, this is the largest randomized controlled trial with the longest follow-up comparing these two laparoscopic bariatric surgeries, Paulina Salminen, MD, PhD, and colleagues write in their study published online in JAMA Surgery.

They aimed to clarify the “controversial issues” of long-term gastroesophageal reflux disease (GERD) symptoms, endoscopic esophagitis, and Barrett’s esophagus after SG vs. RYGB.    

The findings showed that “there was no difference in the prevalence of Barrett’s esophagus, contrary to previous reports of alarming rates of Barrett’s [esophagus] after sleeve gastrectomy,” Dr. Salminen from Turku (Finland) University Hospital, told this news organization in an email.

“However, our results also show that esophagitis and GERD symptoms are significantly more prevalent after sleeve [gastrectomy], and GERD is an important factor to be considered in the preoperative assessment of bariatric surgery and procedure choice,” she said.

The takeaway is that “we have two good procedures providing good and sustainable 10-year results for both weight loss and remission of comorbidities” for severe obesity, a major health risk, Dr. Salminen summarized.

10-year data analysis

Long-term outcomes from randomized clinical trials of laparoscopic SG vs. RYGB are limited, and recent studies have shown a high incidence of worsening of de novo GERD, esophagitis, and Barrett’s esophagus, after laparoscopic SG, Dr. Salminen and colleagues write.

To investigate, they analyzed 10-year data from SLEEVEPASS, which had randomized 240 adult patients with severe obesity to either SG or RYGB at three hospitals in Finland during 2008-2010.

At baseline, 121 patients were randomized to SG and 119 to RYGB. They had a mean age of 48 years and a mean body mass index of 45.9 kg/m², and 70% were women.

Two patients never had the surgery, and at 10 years, 10 patients had died of causes unrelated to bariatric surgery.

At 10 years, 193 of the 228 remaining patients (85%) completed the follow-up for weight loss and other comorbidity outcomes, and 176 of 228 (77%) underwent gastroscopy.

The trial’s primary endpoint was percent excess weight loss (%EWL). At 10 years, the median %EWL was 43.5% after SG vs. 50.7% after RYGB, with a wide range for both procedures (roughly 2%-110%). The mean estimated %EWL did not meet the criterion for equivalence, with a difference of 8.4% favoring RYGB.
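As background, %EWL expresses weight lost as a share of the patient’s baseline excess weight over an ideal weight. A minimal sketch of the conventional calculation, assuming a BMI of 25 kg/m² as the ideal-weight reference (an illustrative convention; the trial’s exact definition may differ):

```python
def percent_ewl(baseline_kg: float, current_kg: float, height_m: float,
                ideal_bmi: float = 25.0) -> float:
    """Percent excess weight loss: weight lost as a share of baseline excess
    weight, where 'ideal' weight is taken as ideal_bmi * height^2 (an
    illustrative convention; studies may define ideal weight differently)."""
    ideal_kg = ideal_bmi * height_m ** 2
    excess_kg = baseline_kg - ideal_kg
    if excess_kg <= 0:
        raise ValueError("no excess weight at baseline")
    return 100.0 * (baseline_kg - current_kg) / excess_kg

# Example: 130 kg at baseline, 100 kg at follow-up, height 1.70 m
print(round(percent_ewl(130, 100, 1.70), 1))  # -> 51.9
```

With these assumed inputs, losing 30 kg from a 130 kg baseline corresponds to roughly 52% EWL, in the range reported for both procedures.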

After SG and RYGB, there were no statistically significant differences in type 2 diabetes remission (26% and 33%, respectively), dyslipidemia (19% and 35%, respectively), or obstructive sleep apnea (16% and 31%, respectively).

Hypertension remission was greater after RYGB than after SG (24% vs. 8%; P = .04).

Esophagitis was more prevalent after SG than after RYGB (31% vs. 7%; P < .001).

‘Very important study’

“This is a very important study, the first to report 10-year results of a randomized controlled trial comparing the two most frequently used bariatric operations, SG and RYGB,” Beat Peter Müller, MD, MBA, and Adrian Billeter, MD, PhD, who were not involved with this research, told this news organization in an email.

“The results will have a major impact on the future of bariatric surgery,” according to Dr. Müller and Dr. Billeter, from Heidelberg (Germany) University.

The most relevant findings are the GERD outcomes, they said. Because of the high rate of upper endoscopies at 10 years (73%), the study allowed a good assessment of this.

“While this study confirms that SG is a GERD-prone procedure, it clearly demonstrates that GERD after SG does not induce severe esophagitis and Barrett’s esophagus,” they said.

Most importantly, the rate of Barrett’s esophagus, the precursor lesion of adenocarcinomas of the esophago-gastric junction, was similar (4%) after both operations, and there was no dysplasia in either group, they stressed.

“The main problem after SG remains new-onset GERD, for which still no predictive parameter exists,” according to Dr. Müller and Dr. Billeter.

“The take home message … is that GERD after SG is generally mild and the risk of Barrett’s esophagus is equally higher after SG and RYGB,” they said. “Therefore, all patients after any bariatric operations should undergo regular upper endoscopies.” 

However, “RYGB still leads to an increase in proton-pump inhibitor use, despite RYGB being one of the most effective antireflux procedures,” they said. “This finding needs further investigation.”

Furthermore, “a 4% Barrett esophagus rate 10 years after RYGB is troublesome, and the reasons should be investigated,” they added.

“Another relevant finding is that after 10 years, RYGB has a statistically better weight loss, which reaches the primary endpoint of the SLEEVEPASS trial for the first time,” they noted, yet the clinical relevance of this is not clear, since there was no difference in resolution of comorbidities, except for hypertension. 

Gyanprakash A. Ketwaroo, MD, of Baylor College of Medicine, Houston, who was not involved with this research, agreed that “the study shows durable and good weight loss for either type of laparoscopic surgery with important metabolic effects and confirms the long-term benefits of weight-loss surgery.”

“What is somewhat new is the lower levels of Barrett’s esophagus after sleeve gastrectomy compared with several earlier studies,” he told this news organization in an email.

“This is somewhat incongruent with the relatively high incidence of postsleeve esophagitis noted in the study, which is an accepted risk factor for Barrett’s esophagus,” he continued. “Thus, I believe concern will still remain about GERD-related complications, including Barrett’s [esophagus], after sleeve gastrectomy.”    

“This paper highlights the need for larger prospective studies, especially those that include diverse, older populations with multiple risk factors for Barrett’s esophagus,” Dr. Ketwaroo said.

Looking ahead

Analyzing a large data set, such as that from SLEEVEPASS, possibly combined with data from the SM-BOSS and BariSurg trials, with machine learning and other sophisticated techniques might identify parameters that could be used to choose the best operation for an individual patient, Dr. Salminen speculated.

“I think what we have learned from these long-term follow-up results is that GERD assessment should be a part of the preoperative assessment, and for patients who have preoperative GERD symptoms and GERD-related endoscopic findings (e.g., hiatal hernia), gastric bypass would be a more optimal procedure choice, if there are no contraindications for it,” she said.

Patient discussions should also cover “long-term symptoms, for example, abdominal pain after RYGB,” she added.

“I am looking forward to our future 20-year follow-up results,” Dr. Salminen said, “which will shed more light on this topic of postoperative [endoscopic] surveillance.”

In the meantime, “preoperative gastroscopy is necessary and beneficial, at least when considering sleeve gastrectomy,” she said.

The SLEEVEPASS trial was supported by the Mary and Georg C. Ehrnrooth Foundation, the Government Research Foundation (in a grant awarded to Turku University Hospital), the Orion Research Foundation, the Paulo Foundation, and the Gastroenterological Research Foundation. Dr. Salminen reported receiving grants from the Government Research Foundation awarded to Turku University Hospital and the Mary and Georg C. Ehrnrooth Foundation. Another coauthor received grants from the Orion Research Foundation, the Paulo Foundation, and the Gastroenterological Research Foundation during the study. No other disclosures were reported.

A version of this article first appeared on Medscape.com.


At 10 years, there were no statistically significant between-procedure differences in type 2 diabetes remission, dyslipidemia, or obstructive sleep apnea, but hypertension remission was greater with RYGB.

However, importantly, the cumulative incidence of Barrett’s esophagus was similar after both procedures (4%) and markedly lower than reported in previous trials (14%-17%).

To their knowledge, this is the largest randomized controlled trial with the longest follow-up comparing these two laparoscopic bariatric surgeries, Paulina Salminen, MD, PhD, and colleagues write in their study published online in JAMA Surgery.

They aimed to clarify the “controversial issues” of long-term gastroesophageal reflux disease (GERD) symptoms, endoscopic esophagitis, and Barrett’s esophagus after SG vs. RYGB.    

The findings showed that “there was no difference in the prevalence of Barrett’s esophagus, contrary to previous reports of alarming rates of Barrett’s [esophagus] after sleeve gastrectomy,” Dr. Salminen, from Turku (Finland) University Hospital, told this news organization in an email.

“However, our results also show that esophagitis and GERD symptoms are significantly more prevalent after sleeve [gastrectomy], and GERD is an important factor to be considered in the preoperative assessment of bariatric surgery and procedure choice,” she said.

The takeaway is that “we have two good procedures providing good and sustainable 10-year results for both weight loss and remission of comorbidities” for severe obesity, a major health risk, Dr. Salminen summarized.
 

10-year data analysis

Long-term outcomes from randomized clinical trials of laparoscopic SG vs. RYGB are limited, and recent studies have shown a high incidence of worsening or de novo GERD, esophagitis, and Barrett’s esophagus after laparoscopic SG, Dr. Salminen and colleagues write.

To investigate, they analyzed 10-year data from SLEEVEPASS, which had randomized 240 adult patients with severe obesity to either SG or RYGB at three hospitals in Finland during 2008-2010.

At baseline, 121 patients were randomized to SG and 119 to RYGB. They had a mean age of 48 years, a mean body mass index of 45.9 kg/m2, and 70% were women.

Two patients never had the surgery, and at 10 years, 10 patients had died of causes unrelated to bariatric surgery.

At 10 years, 193 of the 228 remaining patients (85%) completed the follow-up for weight loss and other comorbidity outcomes, and 176 of 228 (77%) underwent gastroscopy.

The primary study endpoint of the trial was percent excess weight loss (%EWL). At 10 years, the median %EWL was 43.5% after SG vs. 50.7% after RYGB, with a wide range for both procedures (roughly 2%-110% excess weight loss). Mean estimated %EWL was not equivalent between the procedures, differing by 8.4 percentage points in favor of RYGB.
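For readers unfamiliar with the endpoint, %EWL expresses weight lost as a fraction of “excess” weight above an ideal weight. A minimal sketch for a hypothetical patient, assuming the common convention of defining ideal weight at a BMI of 25 (the trial’s exact definition is not restated here):

```python
# Percent excess weight loss (%EWL) for a hypothetical patient.
# Assumes ideal weight is defined at BMI 25, a common convention;
# the figures below are illustrative, not trial data.
def percent_ewl(baseline_kg: float, followup_kg: float, height_m: float,
                ideal_bmi: float = 25.0) -> float:
    ideal_kg = ideal_bmi * height_m ** 2   # weight at the "ideal" BMI
    excess_kg = baseline_kg - ideal_kg     # excess weight at baseline
    return 100.0 * (baseline_kg - followup_kg) / excess_kg

# e.g., 130 kg at baseline, 95 kg at 10 years, height 1.68 m
print(round(percent_ewl(130, 95, 1.68), 1))  # -> 58.9
```

The same absolute weight loss therefore yields a different %EWL depending on a patient’s starting excess weight, which is one reason the trial reports such a wide range.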

After SG and RYGB, there were no statistically significant differences in type 2 diabetes remission (26% and 33%, respectively), dyslipidemia (19% and 35%, respectively), or obstructive sleep apnea (16% and 31%, respectively).

Hypertension remission was superior after RYGB (24% vs. 8% after SG; P = .04).

Esophagitis was more prevalent after SG (31% vs. 7%; P < .001).
 

 

 

‘Very important study’

“This is a very important study, the first to report 10-year results of a randomized controlled trial comparing the two most frequently used bariatric operations, SG and RYGB,” Beat Peter Müller, MD, MBA, and Adrian Billeter, MD, PhD, who were not involved with this research, told this news organization in an email.

“The results will have a major impact on the future of bariatric surgery,” according to Dr. Müller and Dr. Billeter, from Heidelberg (Germany) University.

The most relevant findings are the GERD outcomes, they said. Because of the high rate of upper endoscopies at 10 years (73%), the study allowed a good assessment of this.

“While this study confirms that SG is a GERD-prone procedure, it clearly demonstrates that GERD after SG does not induce severe esophagitis and Barrett’s esophagus,” they said.

Most importantly, the rate of Barrett’s esophagus, the precursor lesion of adenocarcinomas of the esophago-gastric junction, was similar (4%) after both operations, and there was no dysplasia in either group, they stressed.

“The main problem after SG remains new-onset GERD, for which still no predictive parameter exists,” according to Dr. Müller and Dr. Billeter.

“The take-home message … is that GERD after SG is generally mild and the risk of Barrett’s esophagus is equally high after SG and RYGB,” they said. “Therefore, all patients after any bariatric operations should undergo regular upper endoscopies.”

However, “RYGB still leads to an increase in proton-pump inhibitor use, despite RYGB being one of the most effective antireflux procedures,” they said. “This finding needs further investigation.”

Furthermore, “a 4% Barrett esophagus rate 10 years after RYGB is troublesome, and the reasons should be investigated,” they added.

“Another relevant finding is that after 10 years, RYGB has a statistically better weight loss, which reaches the primary endpoint of the SLEEVEPASS trial for the first time,” they noted, yet the clinical relevance of this is not clear, since there was no difference in resolution of comorbidities, except for hypertension. 

Gyanprakash A. Ketwaroo, MD, of Baylor College of Medicine, Houston, who was not involved with this research, agreed that “the study shows durable and good weight loss for either type of laparoscopic surgery with important metabolic effects and confirms the long-term benefits of weight-loss surgery.”

“What is somewhat new is the lower levels of Barrett’s esophagus after sleeve gastrectomy compared with several earlier studies,” he told this news organization in an email.

“This is somewhat incongruent with the relatively high incidence of postsleeve esophagitis noted in the study, which is an accepted risk factor for Barrett’s esophagus,” he continued. “Thus, I believe concern will still remain about GERD-related complications, including Barrett’s [esophagus], after sleeve gastrectomy.”    

“This paper highlights the need for larger prospective studies, especially those that include diverse, older populations with multiple risk factors for Barrett’s esophagus,” Dr. Ketwaroo said.
 

Looking ahead

Using a large data set, such as that from SLEEVEPASS and possibly with data from the SM-BOSS trial and the BariSurg trial, with machine learning and other sophisticated analyses might identify parameters that could be used to choose the best operation for an individual patient, Dr. Salminen speculated. 

 

 

“I think what we have learned from these long-term follow-up results is that GERD assessment should be a part of the preoperative assessment, and for patients who have preoperative GERD symptoms and GERD-related endoscopic findings (e.g., hiatal hernia), gastric bypass would be a more optimal procedure choice, if there are no contraindications for it,” she said.

Patient discussions should also cover “long-term symptoms, for example, abdominal pain after RYGB,” she added.

“I am looking forward to our future 20-year follow-up results,” Dr. Salminen said, “which will shed more light on this topic of postoperative [endoscopic] surveillance.”

In the meantime, “preoperative gastroscopy is necessary and beneficial, at least when considering sleeve gastrectomy,” she said.

The SLEEVEPASS trial was supported by the Mary and Georg C. Ehrnrooth Foundation, the Government Research Foundation (in a grant awarded to Turku University Hospital), the Orion Research Foundation, the Paulo Foundation, and the Gastroenterological Research Foundation. Dr. Salminen reported receiving grants from the Government Research Foundation awarded to Turku University Hospital and the Mary and Georg C. Ehrnrooth Foundation. Another coauthor received grants from the Orion Research Foundation, the Paulo Foundation, and the Gastroenterological Research Foundation during the study. No other disclosures were reported.

A version of this article first appeared on Medscape.com.

FROM JAMA SURGERY

Best strategy to prevent schizophrenia relapse yields unexpected results


A large meta-analysis sheds light on the best antipsychotic maintenance strategy to prevent relapse in clinically stable schizophrenia – with some unexpected results that have potential implications for changes to current guidelines.

Consistent with the researchers’ hypothesis, continuing antipsychotic treatment at the standard dose, switching to another antipsychotic, and reducing the dose were all significantly more effective than stopping antipsychotic treatment in preventing relapse.

However, contrary to the researchers’ hypothesis, which was based on current literature, switching to another antipsychotic was just as effective as continuing an antipsychotic at the standard dose.

Switching to another antipsychotic “does not increase the risk of relapse. This result was not expected, as previous literature suggested otherwise,” Giovanni Ostuzzi, MD, PhD, of the University of Verona (Italy), said in an interview.

“On the other hand, reducing the dose below the standard range used in the acute phase carries a tangible risk of relapse, and should be limited to selected cases, for example those where the risk of withdrawing the treatment altogether is particularly high,” Dr. Ostuzzi said.

“These results should inform evidence-based guidelines, considering that clinical practices for relapse prevention are still heterogeneous and too often guided by clinical common sense only,” he added.

The study was published online in Lancet Psychiatry.
 

Guideline update warranted

The researchers evaluated the effect of different antipsychotic treatment strategies on risk for relapse in a network meta-analysis of 98 randomized controlled trials (RCTs) involving nearly 14,000 patients.

Compared to stopping the antipsychotic, all continuation strategies were effective in preventing relapse.

The risk for relapse was largely (and similarly) reduced when continuing the antipsychotic at the standard dose or switching to a different antipsychotic (relative risk, 0.37 and RR, 0.44, respectively), the researchers found.

Both strategies outperformed reducing the antipsychotic dose below the standard range (RR, 0.68).

For every three patients continuing an antipsychotic at standard doses, one additional patient will avoid relapse, compared with patients stopping an antipsychotic, “which can be regarded as a large-effect magnitude according to commonly used thresholds and results from RCTs in acute schizophrenia,” the researchers write.

The number needed to treat (NNT) slightly increased to about 3.5 for patients who switched antipsychotic treatment – “still regarded as a large-effect magnitude,” they note.
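The arithmetic linking relative risk to NNT can be sketched as follows; the control-group relapse rate used here is an assumed illustrative value, not a figure reported in the study:

```python
# NNT = 1 / absolute risk reduction. With an assumed relapse rate of
# 53% after stopping antipsychotics (illustrative only), the reported
# relative risks reproduce NNTs of about 3 and 3.5.
def nnt(control_rate: float, relative_risk: float) -> float:
    arr = control_rate * (1.0 - relative_risk)  # absolute risk reduction
    return 1.0 / arr

control_rate = 0.53  # assumed relapse rate when stopping treatment
print(round(nnt(control_rate, 0.37), 1))  # continuing at standard dose -> 3.0
print(round(nnt(control_rate, 0.44), 1))  # switching antipsychotic -> 3.4
```

Because the absolute risk reduction, and hence the NNT, depends on the baseline relapse rate, the same relative risks would yield larger NNTs in lower-risk populations.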

“Currently, most psychiatrists are aware of the benefits of continuing antipsychotics in clinically stable individuals. However, they might face the necessity of changing the ongoing treatment strategy, generally because of burdening side effects, poor adherence, or both,” said Dr. Ostuzzi.

“Our findings support updating clinical guidelines to recognize that switching to another antipsychotic during maintenance treatment can be as effective as continuing antipsychotics at standard dose, whereas dose reduction below standard doses should be limited to selected cases,” the investigators write.
 

More to the story

In an accompanying editorial, Marieke J.H. Begemann, PhD, University Medical Center Groningen (the Netherlands), and colleagues note that the large number of patients included in the analysis provides “great credibility” to the findings, which are “trustworthy and important, yet only tell part of the story.”

They note that, while tapering information was often missing, antipsychotic discontinuation was probably abrupt for about two-thirds of the included studies. 

“The issue of slow versus swift tapering is not yet settled, as there is a scarcity of RCTs that provide very gradual tapering over several months,” the editorialists write.

To fill this gap, several randomized trials are now in progress to specifically address the effects of gradual tapering or discontinuation vs. antipsychotic maintenance treatment in clinically stable schizophrenia.

“Time is pressing, as patients, their families, and clinicians need evidence-based data to weigh up the risks and benefits of maintaining, switching, or reducing medication with respect to a range of outcomes that are important to them, including social functioning, cognition, physical health, sexual health, and quality of life, thus going well beyond relapse prevention,” the editorialists note.

“Schizophrenia-spectrum disorders are heterogeneous with a largely unpredictable course, and we have known for a long time that a substantial proportion of patients who experienced a first psychosis can manage without antipsychotic medication. The challenge for future research is therefore to identify this subgroup on the basis of individual characteristics and guide them in tapering medication safely,” they add.

The study had no funding source. Dr. Ostuzzi reports no relevant financial relationships. A complete list of author disclosures is available with the original article. The editorialists have reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


FROM THE LANCET PSYCHIATRY

Lifestyle medicine eases anxiety symptoms


Lifestyle medicine significantly improved symptoms for patients with anxiety, compared with controls, based on data from a meta-analysis of more than 18,000 individuals.

Despite the availability of effective treatment strategies, including pharmacotherapy, psychotherapy, and combination therapy, the prevalence of anxiety continues to increase, especially in low-income and conflict-ridden countries, Vincent Wing-Hei Wong, a PhD student at The Chinese University of Hong Kong, and colleagues wrote.


Previous studies have shown that lifestyle factors including diet, sleep, and sedentary behavior are involved in the development of anxiety symptoms, but the impact of lifestyle medicine (LM) as a treatment for anxiety has not been well studied, they wrote.

In a meta-analysis published in the Journal of Affective Disorders, the researchers identified 53 randomized, controlled trials with a total of 18,894 participants. Anxiety symptoms were measured using self-report questionnaires including the Hospital Anxiety and Depression Scale, the Depression Anxiety and Stress Scale, and the Generalized Anxiety Disorder–7. Random-effects models were used to assess the effect of the intervention immediately post treatment and at short-term (1-3 months post treatment), medium-term (4-6 months), and long-term (7 months or more) follow-up.
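For context, a random-effects meta-analysis weights each study by the inverse of its variance plus an estimated between-study variance, rather than assuming all trials estimate one common effect. A minimal DerSimonian-Laird sketch with made-up effect sizes (not data from this meta-analysis):

```python
# DerSimonian-Laird random-effects pooling (illustrative effect sizes,
# not data from this meta-analysis).
def pool_random_effects(effects, variances):
    k = len(effects)
    w = [1.0 / v for v in variances]                     # fixed-effect weights
    sw = sum(w)
    y_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                   # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]       # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

# three hypothetical standardized mean differences (negative = less anxiety)
pooled, tau2 = pool_random_effects([-0.40, -0.25, -0.55], [0.02, 0.03, 0.04])
print(round(pooled, 2))  # -> -0.39
```

When the heterogeneity statistic Q is small (as in this toy example), the between-study variance estimate is zero and the result coincides with a fixed-effect pooled estimate.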

The studies included various combinations of LM intervention involving exercise, stress management, and sleep management. The interventions ranged from 1 month to 4 years, with an average duration of 6.3 months.

Overall, patients randomized to multicomponent LM interventions showed significantly improved symptoms compared to controls immediately after treatment and at short-term follow-up (P < .001 for both).

However, no significant differences were noted between the multicomponent LM intervention and control groups at medium-term follow-up, the researchers said. Only one study included data on long-term effects, so these effects were not evaluated in a meta-analysis, and more research is needed.

In a subgroup analysis, the effect was greatest for individuals with moderate anxiety symptoms at baseline (P < .05). “Our results could perhaps be explained by the occurrence of floor effect; those with higher baseline anxiety symptoms have greater room for improvement relative to those with fewer symptoms,” the researchers wrote.

The study findings were limited by several factors including the risk of overall bias and publication bias for the selected studies, as well as the limited degree of improvement because most patients had minimal anxiety symptoms at baseline, the researchers noted. Other limitations included the small number of studies for subgroup comparisons and the use of self-reports.

However, the results were strengthened by the use of broad search terms to capture multiple lifestyle determinants, and the diverse study populations and backgrounds from individuals in 19 countries.

The results align with findings from previous studies and support the value of multicomponent LM interventions for patients with anxiety immediately after treatment and in the short term, the researchers emphasized.

“The LM approach, which leverages a range of universal lifestyle measures to manage anxiety and other common mental disorders such as depression, may be a viable solution to address the huge mental health burden through empowering individuals to practice self-management,” they concluded.

However, the researchers acknowledged the need for more randomized, controlled trials targeting patients with higher baseline anxiety levels or anxiety disorders, and using technology to improve treatment adherence.

The study received no outside funding. The researchers had no financial conflicts to disclose.
 


FROM THE JOURNAL OF AFFECTIVE DISORDERS


Fertility rates lower in disadvantaged neighborhoods


A new study ties the odds of conception to the advantages of the neighborhood a woman lives in.

In a cohort of more than 6,000 women who were trying to get pregnant without fertility treatments, the probability of conception was reduced 21%-23% per menstrual cycle when comparing the most disadvantaged neighborhoods with the least disadvantaged.

“When disadvantaged neighborhood status was categorized within each state (as opposed to nationally), the results were slightly larger in magnitude,” wrote authors of the study published online in JAMA Network Open.

Among 6,356 participants, 3,725 pregnancies were observed for 27,427 menstrual cycles of follow-up. Average age was 30, and most participants were non-Hispanic White (5,297 [83.3%]) and had not previously given birth (4,179 [65.7%]).

When the researchers compared the top and bottom deciles of disadvantaged neighborhood status, adjusted fecundability ratios (the per-cycle probability of conception) were 0.79 (95% confidence interval [CI], 0.66-0.96) for national-level area deprivation index (ADI) rankings and 0.77 (95% CI, 0.65-0.92) for within-state ADI rankings. ADI score includes population indicators related to educational attainment, housing, employment, and poverty.
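A fecundability ratio is simply a ratio of per-cycle probabilities of conception between two groups. The arithmetic can be sketched as follows, using made-up counts rather than study data; note the study's published ratios were additionally adjusted for confounders, which this crude version ignores:

```python
# Illustrative only: hypothetical counts, not data from the study.
# Fecundability = per-cycle probability of conception; the fecundability
# ratio compares one group's probability with another's.

def fecundability(conceptions: int, cycles: int) -> float:
    """Crude per-cycle probability of conception."""
    return conceptions / cycles

least_deprived = fecundability(conceptions=180, cycles=1000)  # 0.18
most_deprived = fecundability(conceptions=140, cycles=1000)   # 0.14

fr = most_deprived / least_deprived
print(round(fr, 2))  # 0.78, i.e., a roughly 22% lower per-cycle probability
```

A crude ratio like this one would be confounded by age, intercourse frequency, and the other factors listed below; the study's 0.79 and 0.77 figures came from adjusted regression models.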

“These findings suggest that investments in disadvantaged neighborhoods may yield positive cobenefits for fertility,” the authors wrote.

The researchers used the Pregnancy Study Online, for which baseline data were collected from women in the United States from June 19, 2013, through April 12, 2019.

In the United States, 10%-15% of reproductive-aged couples experience infertility, defined as the inability to conceive after a year of unprotected intercourse.
 

Reason behind the numbers unclear

Mark Hornstein, MD, director of the reproductive endocrinology division at Brigham and Women’s Hospital and professor at Harvard Medical School, both in Boston, said in an interview that this study gives the “what,” but the “why” is harder to pinpoint.

What is not known, he said, is what kind of access the women had to fertility counseling or treatment.

The association between fertility and neighborhood advantage status is very plausible given the well-established links between disadvantaged regions and poorer health outcomes, he said, adding that the authors make a good case for their conclusions in the paper.

The authors ruled out many potential confounders, such as age of the women, reproductive history, multivitamin use, education level, household income, and frequency of intercourse, and still there was a difference between disadvantaged and advantaged neighborhoods, he noted.

Dr. Hornstein said his own research team has found that lack of knowledge about insurance coverage regarding infertility services may keep women from seeking the services.

“One of the things I worry about is access,” he said. “[The study authors] didn’t really look at that. They just looked at what the chances were that they got pregnant. But they didn’t say how many of those women had a workup, an evaluation, for why they were having difficulty, if they were, or had treatment. So I don’t know if some or all or none of that difference that they saw from the highest neighborhood health score to the most disadvantaged – if that was from inherent problems in the area, access to the best health care, or some combination.”
 

 

 

Discussions have focused on changing personal behaviors

Discussions on improving fertility often center on changing personal behaviors, the authors noted. “However, structural, political, and environmental factors may also play a substantial role,” they wrote.

The findings are in line with previous research on the effect of stress on in vitro outcomes, they pointed out. “Perceived stress has been associated with poorer in vitro fertilization outcomes and reduced fecundability among couples attempting spontaneous conception,” the authors noted.

Studies also have shown that living in a disadvantaged neighborhood is linked with comorbidities during pregnancy, such as increased risks of gestational hypertension (risk ratio for lowest vs. highest quartile: 1.24 [95% CI, 1.14-1.35]) and poor gestational weight gain (relative risk for lowest vs. highest quartile: 1.1 [95% CI, 1.1-1.2]).
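For readers unfamiliar with these statistics, a risk ratio and its 95% confidence interval are conventionally computed on the log scale. A minimal sketch of that calculation, using hypothetical 2×2 counts of our own (not figures from the cited studies):

```python
import math

def risk_ratio_ci(a: int, n1: int, b: int, n2: int, z: float = 1.96):
    """Risk ratio of group 1 vs. group 2 with a Wald 95% CI on the log scale.

    a / n1 = events / total in group 1; b / n2 = events / total in group 2.
    """
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical: 124/1,000 events in one quartile vs. 100/1,000 in another.
rr, lo, hi = risk_ratio_ci(a=124, n1=1000, b=100, n2=1000)
print(f"RR {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # RR 1.24 (95% CI, 0.97-1.59)
```

The same point estimate can come with a wide or narrow interval depending on sample size, which is why the intervals above matter as much as the ratios themselves.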

In addition, policies such as those that support civil rights, protect the environment, and invest in underresourced communities have been shown to improve health markers such as life expectancy.

Policy decisions can also perpetuate a cycle of stress, they wrote. Disadvantaged communities may have more air pollution, which has been shown to have negative effects on fertility. Unemployment has been linked with decreased population-level fertility rates. Lack of green space may result in fewer areas to reduce stress.

A study coauthor reported grants from the National Institutes of Health during the conduct of the study; nonfinancial support from Swiss Precision Diagnostics GmbH, Labcorp, Kindara.com, and FertilityFriend.com; and consulting for AbbVie outside the submitted work. No other author disclosures were reported. Dr. Hornstein reported no relevant financial relationships.



Will ESD replace EMR for large colorectal polyps?


Dear colleagues,

Resection of polyps is the bread and butter of endoscopy. Advances in technology have enabled us to tackle larger and more complex lesions throughout the gastrointestinal tract, especially through endoscopic mucosal resection (EMR). Endoscopic submucosal dissection (ESD) is another technique that offers much promise for complex colorectal polyps and is being rapidly adopted in the West. But do its benefits outweigh the costs in time, money, and additional training needed for successful ESD? How can we justify higher recurrence rates with EMR when ESD is available? Will reimbursement continue to favor EMR? In this issue of Perspectives, Dr. Alexis Bayudan and Dr. Craig A. Munroe make the case for adopting ESD, while Dr. Sumeet Tewani highlights the benefits of EMR. I invite you to a great debate and look forward to hearing your thoughts on Twitter @AGA_GIHN and by email at [email protected].
 

Dr. Gyanprakash Ketwaroo

Gyanprakash A. Ketwaroo, MD, MSc, is assistant professor of medicine at Baylor College of Medicine, Houston. He is an associate editor for GI & Hepatology News.

 

 

The future standard of care

BY ALEXIS BAYUDAN, MD, AND CRAIG A. MUNROE, MD

Endoscopic submucosal dissection (ESD) is a minimally invasive, organ-sparing, flexible endoscopic technique used to treat advanced neoplasia of the digestive tract, with the goal of en bloc resection for accurate histologic assessment. ESD was introduced over 25 years ago in Japan by a small group of innovative endoscopists.1 After its initial adoption and success with removing gastric lesions, ESD later evolved as a technique used for complete resection of lesions throughout the gastrointestinal tract.

The intent of ESD is to achieve clear pathologic evaluation of deep and lateral margins, which is generally lost when piecemeal EMR (pEMR) is performed on lesions larger than 2 cm. With growing global experience, the evidence is clear that ESD is advantageous when compared to pEMR in the resection of large colorectal lesions en bloc, leading to improved curative resection rates and less local recurrence.

Dr. Alexis Bayudan

From our own experience, and from the results of many studies, we know that although procedure time in ESD can be longer, the rate of complete resection is far superior. ESD was previously cited as having a 10% risk of perforation in the 1990s and early 2000s, but current rates are closer to 4.5%, as noted by Nimii et al., with nearly all perforations successfully treated with endoscopic closure.1 In a 2021 meta-analysis of 21 studies, Lim et al. demonstrated that, although there is an increased risk of perforation with ESD compared with EMR (risk ratio, 7.597; 95% confidence interval, 4.281-13.479; P < .001), there is no significant difference in bleeding risk between the two techniques.2

Since its inception, many refinements of the ESD technique have occurred through technology, and better understanding of anatomy and disease states. These include, but are not limited to, improvements in hemostatic and closure techniques, electrosurgical equipment, resection and traction devices, the use of carbon dioxide, the ability to perform full-thickness endoscopic surgery, and submucosal lifting.1 The realm of endoscopic innovation is moving at a rapid pace within commercial and noncommercial entities, and advancements in ESD devices will allow for further improvements in procedure times and decreased procedural complications. Conversely, there have been few advancements in EMR technique in decades.

Dr. Craig Munroe

Further developments in ESD will continue to democratize this intervention, so that it can be practiced in all medical centers, not just expert centers. However, for ESD to become standard of care in the Western world, it will require more exposure and training. ESD has rapidly spread throughout Japan because of the master-mentor relationship that fosters safe learning, in addition to an abundance of highly skilled EMR-experienced physicians who went on to acquire their skills under the supervision of ESD experts. Current methods of teaching ESD, such as using pig models to practice specific steps of the procedure, can be implemented in Western gastroenterology training programs and through GI and surgical society training programs to teach safe operation in the third space. Mentorship and proctorship are also mandatory. The incorporation of ESD into standard practice over pEMR is akin to laparoscopic cholecystectomy revolutionizing gallbladder surgery, even though open cholecystectomy was known to be effective.

A major limitation in the adoption of ESD in the West is reimbursement. Despite mounting evidence of the superiority of ESD in well-trained hands, there had long been no reimbursement pathway that accounted for the additional training and resources needed to perform these procedures safely.3 This pushes more Western endoscopists toward pEMR, which is anti-innovative. In October 2021, the Centers for Medicare and Medicaid Services expanded reimbursement for ESD (Healthcare Common Procedure Coding System code C9779). The availability of billing codes paves the way for increased patient access to these therapies. Hopefully, additional codes will follow.

With the mounting evidence demonstrating ESD is superior to pEMR in terms of curative resection and recurrence rates, we think it is time for ESD to be incorporated widely into Western practice. ESD is still evolving and improving; ESD will become both safer and more effective. ESD has revolutionized endoscopic resection, and we are just beginning to see the possibilities and value of these techniques.
 

Dr. Bayudan is a second-year fellow, and Dr. Munroe is an associate professor, both at the University of California, San Francisco. They have no relevant conflicts of interest.

References

1. Ferreira J et al. Clin Colon Rectal Surg. 2015 Sep;28(3):146-51.

2. Lim X et al. World J Gastroenterol. 2021 Jul 7;27(25):3925-39.

3. Iqbal S et al. World J Gastrointest Endosc. 2020 Jan 16;12(1):49-52.

 

 

 

More investment than payoff

Most large colorectal polyps are best managed by endoscopic mucosal resection (EMR) and do not require endoscopic submucosal dissection (ESD). EMR can provide complete, safe, and effective removal, preventing colorectal cancer while avoiding the risks of surgery or ESD. EMR has several advantages over ESD. It is minimally invasive, low cost, well tolerated, and allows excellent histopathologic examination. It is performed during colonoscopy in an outpatient endoscopy lab or ambulatory surgery center. There are several techniques that can be performed safely and efficiently using accessories that are readily available. It is easier to learn and perform, with lower risks and fewer resources. Endoscopists can effectively integrate EMR into a busy practice, without making significant additional investments.

Dr. Sumeet K. Tewani

EMR of large adenomas has improved morbidity, mortality, and cost compared with surgery.1-3 I first carefully inspect the lesion to plan the approach and exclude submucosal invasion, which should be referred for ESD or surgery instead. This includes understanding the size, location, morphology, and surface characteristics, using high-definition and narrow-band imaging or Fujinon intelligent chromoendoscopy. Conventional EMR utilizes submucosal injection to lift the polyp away from the underlying muscle layer before hot snare resection. Injection needles and snares of various shapes, sizes, and stiffness are available in most endoscopy labs. The goal is en bloc resection to achieve potential cure with complete histological assessment and a low rate of recurrence. This can be achieved for lesions up to 2 cm in size, although larger lesions require piecemeal resection, which limits accurate histopathology and carries a recurrence rate up to 25%.1 Thermal ablation of the resection margins with argon plasma coagulation or snare-tip soft coagulation can reduce the rate of recurrence. Additionally, most recurrences are identified during early surveillance within 6 months and managed endoscopically. The rates of adverse events, including bleeding (6%-15%), perforation (1%-2%), and postpolypectomy syndrome (< 1%), remain acceptably low.1,4

For many polyps, saline injection is safe, effective, and inexpensive, but it dissipates rapidly with limited duration of effect. Alternative agents can improve the lift, at additional cost.4 I prefer adding dye, such as methylene blue, to differentiate the submucosa from the muscularis, demarcate the lesion margins, and allow easier inspection of the defect. Dilute epinephrine can also be added to reduce intraprocedural bleeding and maintain a clean resection field. I reserve this for the duodenum, but it can be an important adjunct for some colorectal polyps. Submucosal injection also allows assessment for a “nonlifting sign,” which raises suspicion for invasive carcinoma but can also occur with benign submucosal fibrosis from previous biopsy, partial resection, or adjacent tattoo. In these cases, effective management can still be achieved using EMR in combination with avulsion and thermal ablation techniques.

Alternative techniques include cold EMR and underwater EMR.1,4 These are gaining popularity because of their excellent safety profile and favorable outcomes. Cold EMR involves submucosal injection followed by cold-snare resection, eliminating the use of cautery and its associated risks. Cold EMR is very safe and effective for small polyps, and we use this for progressively larger polyps given the low complication rate. Despite the need for piecemeal resection of polyps larger than 10 mm, local recurrence rates are comparable to conventional EMR. Sessile serrated polyps are especially ideal for piecemeal cold EMR. Meanwhile, underwater EMR eliminates the need for submucosal injection by utilizing water immersion, which elevates the mucosal lesion away from the muscularis layer. Either hot or cold snare resection can be performed. Benefits include reduced procedure time and cost, and relatively low complication and recurrence rates, compared with conventional EMR. I find this to be a nice option for laterally spreading polyps, with potentially higher rates of en bloc resection.1,4

ESD involves similar techniques but includes careful dissection of the submucosal layer beneath the lesion. In addition to the tools for EMR, a specialized electrosurgical knife is necessary, as well as dedicated training and mentorship that can be difficult to accommodate for an active endoscopist in practice. The primary advantage of ESD is higher en bloc resection rates for larger and potentially deeper lesions, with accurate histologic assessment and staging, and very low recurrence rates.1,4,5 However, ESD is more complex, technically challenging, and time and resource intensive, with higher risk of complications. Intraprocedural bleeding is common and requires immediate management. Additional risks include 2% risk of delayed bleeding and 5% risk of perforation.1,5 ESD involves an operating room, longer procedure times, and higher cost including surgical, anesthesia, and nursing costs. Some of this may be balanced by reduced frequency of surveillance and therapeutic procedures. While both EMR and ESD carry significant cost savings, compared with surgery, ESD is additionally disadvantaged by lack of reimbursement.

Regardless of the technique, EMR is easier to learn and perform than ESD, uses a limited number of devices that are readily available, and carries lower cost-burden. EMR is successful for most colorectal polyps, with the primary disadvantage being piecemeal resection of larger polyps. The rates of adverse events are lower, and appropriate surveillance is essential to ensuring complete resection and eliminating recurrence. Japanese and European guidelines endorse ESD for lesions that have a high likelihood of cancer invading the submucosa and for lesions that cannot be removed by EMR because of submucosal fibrosis. Ultimately, patients need to be treated individually with the most appropriate technique.

Dr. Tewani of Rockford Gastroenterology Associates is clinical assistant professor of medicine at the University of Illinois, Rockford. He has no relevant conflicts of interest to disclose.

References

1. Rashid MU et al. Surg Oncol. 2022 Mar 18;101742.

2. Law R et al. Gastrointest Endosc. 2016 Jun;83(6):1248-57.

3. Backes Y et al. BMC Gastroenterol. 2016 May 26;16(1):56.

4. Thiruvengadam SS et al. Gastroenterol Hepatol. 2022 Mar;18(3):133-44.

5. Wang J et al. World J Gastroenterol. 2014 Jul 7;20(25):8282-7l.

Publications
Topics
Sections

Dear colleagues,

Resection of polyps is the bread and butter of endoscopy. Advances in technology have enabled us to tackle larger and more complex lesions throughout the gastrointestinal tract, especially through endoscopic mucosal resection (EMR). Endoscopic submucosal dissection (ESD) is another technique that offers much promise for complex colorectal polyps and is being rapidly adopted in the West. But do its benefits outweigh the costs in time, money, and additional training needed for successful ESD? How can we justify the higher recurrence rates of EMR when ESD is available? Will reimbursement continue to favor EMR? In this issue of Perspectives, Dr. Alexis Bayudan and Dr. Craig A. Munroe make the case for adopting ESD, while Dr. Sumeet Tewani highlights the benefits of EMR. I invite you to a great debate and look forward to hearing your own thoughts on Twitter @AGA_GIHN and by email at [email protected].
 

Dr. Gyanprakash Ketwaroo

Gyanprakash A. Ketwaroo, MD, MSc, is assistant professor of medicine at Baylor College of Medicine, Houston. He is an associate editor for GI & Hepatology News.

 

 

The future standard of care

BY ALEXIS BAYUDAN, MD, AND CRAIG A. MUNROE, MD

Endoscopic submucosal dissection (ESD) is a minimally invasive, organ-sparing, flexible endoscopic technique used to treat advanced neoplasia of the digestive tract, with the goal of en bloc resection for accurate histologic assessment. ESD was introduced over 25 years ago in Japan by a small group of innovative endoscopists.1 After its initial adoption and success with removing gastric lesions, ESD later evolved as a technique used for complete resection of lesions throughout the gastrointestinal tract.

The intent of ESD is to achieve clear pathologic evaluation of deep and lateral margins, which is generally lost when piecemeal EMR (pEMR) is performed on lesions larger than 2 cm. With growing global experience, the evidence is clear that ESD is advantageous when compared to pEMR in the resection of large colorectal lesions en bloc, leading to improved curative resection rates and less local recurrence.

Dr. Alexis Bayudan

From our own experience, and from the results of many studies, we know that although procedure time in ESD can be longer, the rate of complete resection is far superior. ESD was cited as having a 10% risk of perforation in the 1990s and early 2000s, but current rates are closer to 4.5%, as noted by Niimi et al., with nearly all perforations successfully treated with endoscopic closure.1 In a 2021 meta-analysis of 21 studies, Lim et al. demonstrated that, although the risk of perforation is higher with ESD than with EMR (risk ratio, 7.597; 95% confidence interval, 4.281-13.479; P < .001), there is no significant difference in bleeding risk between the two techniques.2

Since its inception, many refinements of the ESD technique have occurred through technology, and better understanding of anatomy and disease states. These include, but are not limited to, improvements in hemostatic and closure techniques, electrosurgical equipment, resection and traction devices, the use of carbon dioxide, the ability to perform full-thickness endoscopic surgery, and submucosal lifting.1 The realm of endoscopic innovation is moving at a rapid pace within commercial and noncommercial entities, and advancements in ESD devices will allow for further improvements in procedure times and decreased procedural complications. Conversely, there have been few advancements in EMR technique in decades.

Dr. Craig Munroe

Further developments in ESD will continue to democratize this intervention, so that it can be practiced in all medical centers, not just expert centers. However, for ESD to become the standard of care in the Western world, it will require more exposure and training. ESD spread rapidly throughout Japan because of the master-mentor relationship that fosters safe learning, in addition to an abundance of highly skilled, EMR-experienced physicians who acquired their skills under the supervision of ESD experts. Current methods of teaching ESD, such as using pig models to practice specific steps of the procedure, can be implemented in Western gastroenterology training programs and through GI and surgical society training programs to teach safe operation in the third space. Mentorship and proctorship are also mandatory. The incorporation of ESD into standard practice over pEMR is akin to laparoscopic cholecystectomy revolutionizing gallbladder surgery, even though open cholecystectomy was known to be effective.

A major limitation in the adoption of ESD in the West is reimbursement. Despite mounting evidence of the superiority of ESD in well-trained hands, and the additional training needed to perform these procedures safely, until recently there was no pathway for payment that reflected these increased requirements.3 As a result, more endoscopists in the West perform pEMR, which discourages innovation. In October 2021, the Centers for Medicare and Medicaid Services expanded reimbursement for ESD (Healthcare Common Procedure Coding System code C9779). The availability of billing codes paves the way for increasing patient access to these therapies. Hopefully, additional codes will follow.

With mounting evidence demonstrating that ESD is superior to pEMR in terms of curative resection and recurrence rates, we think it is time for ESD to be incorporated widely into Western practice. ESD is still evolving and improving, and it will become both safer and more effective. ESD has revolutionized endoscopic resection, and we are just beginning to see the possibilities and value of these techniques.
 

Dr. Bayudan is a second-year fellow, and Dr. Munroe is an associate professor, both at the University of California, San Francisco. They have no relevant conflicts of interest.

References

1. Ferreira J et al. Clin Colon Rectal Surg. 2015 Sep;28(3):146-51.

2. Lim X et al. World J Gastroenterol. 2021 Jul 7;27(25):3925-39.

3. Iqbal S et al. World J Gastrointest Endosc. 2020 Jan 16;12(1):49-52.

 

 

 

More investment than payoff

Most large colorectal polyps are best managed by endoscopic mucosal resection (EMR) and do not require endoscopic submucosal dissection (ESD). EMR can provide complete, safe, and effective removal, preventing colorectal cancer while avoiding the risks of surgery or ESD. EMR has several advantages over ESD. It is minimally invasive, low cost, well tolerated, and allows excellent histopathologic examination. It is performed during colonoscopy in an outpatient endoscopy lab or ambulatory surgery center. There are several techniques that can be performed safely and efficiently using accessories that are readily available. It is easier to learn and perform, with lower risks and fewer resources. Endoscopists can effectively integrate EMR into a busy practice, without making significant additional investments.

Dr. Sumeet K. Tewani

EMR of large adenomas offers reduced morbidity, mortality, and cost, compared with surgery.1-3 I first carefully inspect the lesion to plan the approach and to exclude submucosal invasion; lesions with suspected invasion should be referred for ESD or surgery instead. This inspection includes assessing the size, location, morphology, and surface characteristics, using high-definition imaging with narrow-band imaging or Fujinon intelligent chromoendoscopy. Conventional EMR utilizes submucosal injection to lift the polyp away from the underlying muscle layer before hot-snare resection. Injection needles and snares of various shapes, sizes, and stiffness are available in most endoscopy labs. The goal is en bloc resection to achieve potential cure, with complete histological assessment and a low rate of recurrence. This can be achieved for lesions up to 2 cm in size; larger lesions require piecemeal resection, which limits accurate histopathology and carries a recurrence rate of up to 25%.1 Thermal ablation of the resection margins with argon plasma coagulation or snare-tip soft coagulation can reduce the rate of recurrence. Additionally, most recurrences are identified during early surveillance within 6 months and are managed endoscopically. The rates of adverse events, including bleeding (6%-15%), perforation (1%-2%), and postpolypectomy syndrome (< 1%), remain acceptably low.1,4

For many polyps, saline injection is safe, effective, and inexpensive, but it dissipates rapidly, limiting the duration of the lift. Alternative agents can improve the lift, at additional cost.4 I prefer adding a dye, such as methylene blue, to differentiate the submucosa from the muscularis, demarcate the lesion margins, and allow easier inspection of the defect. Dilute epinephrine can also be added to reduce intraprocedural bleeding and maintain a clean resection field; I reserve epinephrine mainly for the duodenum, but it can be an important adjunct for some colorectal polyps. Submucosal injection also allows assessment for a “nonlifting sign,” which raises suspicion for invasive carcinoma but can also occur with benign submucosal fibrosis from previous biopsy, partial resection, or an adjacent tattoo. In these cases, effective management can still be achieved using EMR in combination with avulsion and thermal ablation techniques.

Alternative techniques include cold EMR and underwater EMR.1,4 These are gaining popularity because of their excellent safety profile and favorable outcomes. Cold EMR involves submucosal injection followed by cold-snare resection, eliminating the use of cautery and its associated risks. Cold EMR is very safe and effective for small polyps, and we use it for progressively larger polyps given the low complication rate. Despite the need for piecemeal resection of polyps larger than 10 mm, local recurrence rates are comparable to those of conventional EMR. Sessile serrated polyps are especially well suited to piecemeal cold EMR. Meanwhile, underwater EMR eliminates the need for submucosal injection by utilizing water immersion, which elevates the mucosal lesion away from the muscularis layer. Either hot- or cold-snare resection can be performed. Benefits include reduced procedure time and cost, and relatively low complication and recurrence rates, compared with conventional EMR. I find this to be a nice option for laterally spreading polyps, with potentially higher rates of en bloc resection.1,4

ESD involves similar techniques but adds careful dissection of the submucosal layer beneath the lesion. In addition to the tools used for EMR, a specialized electrosurgical knife is necessary, as well as dedicated training and mentorship that can be difficult to accommodate for an active endoscopist in practice. The primary advantage of ESD is a higher en bloc resection rate for larger and potentially deeper lesions, with accurate histologic assessment and staging, and very low recurrence rates.1,4,5 However, ESD is more complex, technically challenging, and time and resource intensive, with a higher risk of complications. Intraprocedural bleeding is common and requires immediate management; additional risks include delayed bleeding (2%) and perforation (5%).1,5 ESD involves an operating room, longer procedure times, and higher costs, including surgical, anesthesia, and nursing costs. Some of this may be balanced by a reduced frequency of surveillance and therapeutic procedures. While both EMR and ESD carry significant cost savings, compared with surgery, ESD is additionally disadvantaged by a lack of reimbursement.

Regardless of the technique, EMR is easier to learn and perform than ESD, uses a limited number of devices that are readily available, and carries lower cost-burden. EMR is successful for most colorectal polyps, with the primary disadvantage being piecemeal resection of larger polyps. The rates of adverse events are lower, and appropriate surveillance is essential to ensuring complete resection and eliminating recurrence. Japanese and European guidelines endorse ESD for lesions that have a high likelihood of cancer invading the submucosa and for lesions that cannot be removed by EMR because of submucosal fibrosis. Ultimately, patients need to be treated individually with the most appropriate technique.

Dr. Tewani of Rockford Gastroenterology Associates is clinical assistant professor of medicine at the University of Illinois, Rockford. He has no relevant conflicts of interest to disclose.

References

1. Rashid MU et al. Surg Oncol. 2022 Mar 18;101742.

2. Law R et al. Gastrointest Endosc. 2016 Jun;83(6):1248-57.

3. Backes Y et al. BMC Gastroenterol. 2016 May 26;16(1):56.

4. Thiruvengadam SS et al. Gastroenterol Hepatol. 2022 Mar;18(3):133-44.

5. Wang J et al. World J Gastroenterol. 2014 Jul 7;20(25):8282-7.

