Prurigo nodularis diagnosis delay in skin of color gains added significance
NEW YORK – Reducing delays in the diagnosis of prurigo nodularis in patients with skin of color has gained added significance now that effective therapy is available, according to an expert evaluating current approaches at the Skin of Color Update 2023. “As dermatologists, prurigo nodularis is one of the most severe diseases we treat,” said Shawn G. Kwatra, MD, director of the Johns Hopkins Itch Center, Baltimore. Now with one approved therapy and more coming, “it offers one of the most important opportunities we have to dramatically improve someone’s entire life.”
Prior to the September 2022 approval of dupilumab for the treatment of prurigo nodularis (the first treatment approved for this indication), Dr. Kwatra said that the limited options for control of pruritus made him anxious. Prurigo nodularis is characterized by highly itchy nodules that can produce symptoms patients describe as unbearable.
Itch typically severe
On a scale for which 10 represents the worst itch imaginable, scores of 8 or greater are not unusual, according to Dr. Kwatra. Nodules on the trunk and the extensor surfaces of the arms and legs are characteristic, but the persistent itch is the immediate target of treatment once the diagnosis is made. For that reason, he urged clinicians to be familiar with the presentation in patients with darker skin types to reduce time to treatment.
In addition to the difficulty of seeing the characteristic red that is typical of erythema in lighter skin, patients with darker skin types tend to have larger nodules that might vary in shape relative to lighter skin types, Dr. Kwatra said. Given that the presentation of prurigo nodularis is highly heterogeneous even among the same skin types, the nuances in patients with darker skin can be that much more confusing for those without prior experience.
Among Blacks in particular, the nodules in some cases “can be huge,” he added. “They can almost look like keloids due to their thickened and fibrotic appearance.”
Phenotypes appear to be racially linked
In Black patients, the appearance can vary enough relative to individuals with lighter skin that “there seems to be something a little bit different going on,” he said. This is, in fact, supported by a cluster analysis of circulating biomarkers reported by Dr. Kwatra and colleagues in 2022 in the Journal of Investigative Dermatology.
In that study, the biomarker profile distinguished two distinct groups. Whites were more common in a cluster with relatively low expression of inflammatory markers (cluster 1), while Blacks were more common in a cluster with an inflammatory plasma profile (cluster 2), with higher relative expression of multiple cytokines, C-reactive protein, eosinophils, and other markers of up-regulated inflammation.
In addition to a lower rate of myelopathy in cluster 2 than cluster 1 (18% vs. 67%; P = .028), patients in cluster 2 had a significantly worse itch than those in cluster 1 on the Numeric Rating Scale for itch and a significantly lower quality of life based on the Dermatology Life Quality Index score.
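To make the idea of a biomarker cluster analysis concrete, the sketch below groups hypothetical plasma biomarker profiles into two clusters and compares marker levels between them. The marker names, the choice of k-means with two clusters, and the simulated data are illustrative assumptions only; they are not drawn from the published study.

```python
# Illustrative sketch of an unsupervised cluster analysis of circulating
# biomarkers. Marker names, k-means, k=2, and the simulated values are
# assumptions for demonstration, not the study's actual methods or data.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy plasma biomarker panel for 200 hypothetical patients.
biomarkers = pd.DataFrame({
    "IL13": rng.lognormal(1.0, 0.5, 200),
    "IL17A": rng.lognormal(1.2, 0.6, 200),
    "CRP": rng.lognormal(0.8, 0.7, 200),
    "eosinophils": rng.lognormal(5.0, 0.4, 200),
})

# Standardize so each marker contributes comparably, then split into two clusters.
X = StandardScaler().fit_transform(biomarkers)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Compare mean marker levels between clusters to see which group carries
# the more inflammatory profile.
print(biomarkers.groupby(labels).mean())
```

In practice, the cluster with uniformly higher mean levels of the inflammatory markers would correspond to the “inflammatory plasma profile” group described above.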
Other work at Dr. Kwatra’s center that is based on genetic sequencing has provided evidence that Blacks – and Asians to a lesser extent – are predisposed genetically to develop nodules, perhaps explaining why the nodules tend to be larger than those seen in Whites.
The significance of the evidence that prurigo nodularis is associated with a more up-regulated inflammatory profile in Blacks than in Whites is that they might be particularly likely to respond to dupilumab or other targeted immunomodulating therapies that are in development, according to Dr. Kwatra. Although he did not provide data on response by race, he did provide several case examples of complete itch control following dupilumab therapy in Black patients.
In his experience, high levels of blood eosinophils and other inflammatory markers are predictors of response to dupilumab regardless of skin type, but he expressed concern that time to diagnosis is sometimes longer in Black patients if the nuances of disease expression are not appreciated.
For treating prurigo nodularis in Blacks as well as Whites, Dr. Kwatra suggested that clinicians stay current with what he predicted will be a growing array of treatment options. One agent he did not discuss at the meeting is nemolizumab, an interleukin-31 receptor alpha antagonist; soon after the meeting, results of a phase 3 trial of nemolizumab in patients with moderate to severe prurigo nodularis, with Dr. Kwatra as lead author, were published in the New England Journal of Medicine.
In the international placebo-controlled trial, called OLYMPIA 2, treatment was associated with a significant reduction in the signs and symptoms of prurigo nodularis, including reductions in itch, at 16 weeks, although only 4% of patients in the study were Black.
Given the expanding array of therapies, the message of considering prurigo nodularis in Black patients in order to accelerate the time to diagnosis is timely, said Andrew F. Alexis, MD, MPH, professor of clinical dermatology and vice-chair for diversity and inclusion for the department of dermatology, Weill Cornell Medicine, New York.
“Current studies suggest a higher prevalence and greater severity of prurigo nodularis among Black patients compared to White patients,” said Dr. Alexis, agreeing with Dr. Kwatra. Referring to evidence that Blacks might mount a greater inflammatory response to prurigo nodularis than Whites, Dr. Alexis called for “a better understanding of the pathomechanisms” of this disease in order “to address unmet needs and reduce disparities for our diverse population of patients who suffer from prurigo nodularis.”
Dr. Kwatra reported financial relationships with AbbVie, Amgen, Arcutis, ASLAN, Cara, Castle Biosciences, Celldex, Galderma, Incyte, Johnson & Johnson, LEO pharma, Novartis, Pfizer, Regeneron, and Sanofi.
AT SOC 2023
Gout: Studies support early use of urate-lowering therapy, warn of peripheral arterial disease
LA JOLLA, Calif. – A new analysis suggests that it may not be necessary to delay urate-lowering therapy (ULT) in gout flares, and a study warns of the potential heightened risk of peripheral arterial disease (PAD) in gout.
The reports were released at the annual research symposium of the Gout, Hyperuricemia, and Crystal Associated Disease Network (G-CAN).
The urate-lowering report, a systematic review and meta-analysis, suggests that the use of ULT during a gout flare “does not affect flare severity nor the duration of the flare or risk of recurrence in the subsequent month,” lead author Vicky Tai, MBChB, of the University of Auckland (New Zealand), said in a presentation.
She noted that there’s ongoing debate about whether ULT should be delayed until a week or two after gout flares subside to avoid their return. “This is reflected in guidelines on gout management, which have provided inconsistent recommendations on the issue,” Dr. Tai said.
As she noted, the American College of Rheumatology’s 2020 gout guidelines conditionally recommended starting ULT during gout flares – and not afterward – if it’s indicated. (The guidelines also conditionally recommend against ULT in a first gout flare, however, with a few exceptions.)
The British Society for Rheumatology’s 2017 gout guidelines suggested waiting until flares have settled down and “the patient was no longer in pain,” although ULT may be started in patients with frequent attacks. The European Alliance of Associations for Rheumatology’s gout guidelines from 2016 didn’t address timing, Dr. Tai said.
For the new analysis, Dr. Tai and colleagues examined six randomized studies from the United States (two), China (two), Taiwan (one), and Thailand (one) that examined the use of allopurinol (three studies), febuxostat (two studies), and probenecid (one). The studies, dated from 2012 to 2023, randomized 226 subjects with gout to early initiation of ULT vs. 219 who received placebo or delayed ULT. Subjects were tracked for a median of 28 days (15 days to 12 weeks).
Three of the studies were deemed to have high risk of bias.
There were no differences in patient-rated pain scores at various time points, duration of gout flares (examined in three studies), or recurrence of gout flares (examined in four studies).
“Other outcomes of interest, including long-term adherence, time to achieve target serum urate, and patient satisfaction with treatment, were not examined,” Dr. Tai said. “Adverse events were similar between groups.”
She cautioned that the sample sizes are small, and the findings may not be applicable to patients with tophaceous gout or comorbid renal disease.
A similar meta-analysis published in 2022 examined five studies (including three of those in the new analysis); among the five was one study from 1987 that examined azapropazone and indomethacin plus allopurinol. The review found “that initiation of ULT during an acute gout flare did not prolong the duration of acute flares.”
Risk for PAD
In the other study, researchers raised an alarm after finding a high rate of PAD in patients with gout regardless of whether they also had diabetes, which is a known risk factor for PAD. “Our data suggest that gout is an underrecognized risk factor for PAD and indicates the importance of assessing for PAD in gout patients,” lead author Nicole Leung, MD, of NYU Langone Health, said in a presentation.
According to Dr. Leung, little is known about links between PAD and gout. She highlighted a 2018 study that found that patients with obstructive coronary artery disease were more likely to have poor outcomes if they also developed gout after catheterization, and a 2022 study that found higher rates of lower-extremity amputations in patients with gout independent of cardiovascular disease and diabetes. In the latter study, however, the link between gout and amputations was independent of PAD, so whether PAD explains the excess amputations remains unclear.
Patients with gout, she added, are not routinely screened for PAD.
For the new retrospective, cross-sectional analysis, Dr. Leung and colleagues examined Veterans Administration data from 2014 to 2018 for 7.2 million patients. The population was largely male.
Among patients with neither gout nor diabetes – the control group – 140,862 (2.52%) had PAD. In comparison, 11,449 (5.56%) of 205,904 patients with gout but not diabetes had PAD, a rate 2.2 times greater than that of the control group. PAD occurred in 101,582 (8.70%) of 1,168,138 patients with diabetes but not gout, a rate 3.2 times greater than that of the control group. The rate was highest among people with both gout and diabetes, at 9.97% (9,905 of 99,377), about four times greater than that of the control group.
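As a rough check, the crude rates and their ratios to the control group can be recomputed from the counts reported above; the short sketch below does that arithmetic. The reported multipliers may reflect rounding or statistical adjustment, so crude ratios can differ slightly from the figures quoted in the presentation.

```python
# Crude PAD rates by group, recomputed from the counts reported in the text.
# The 2.52% control rate (no gout, no diabetes) is taken directly from the
# text; reported multipliers may be adjusted, so these ratios are approximate.
control_rate = 0.0252

groups = {
    "gout, no diabetes": (11_449, 205_904),
    "diabetes, no gout": (101_582, 1_168_138),
    "gout and diabetes": (9_905, 99_377),
}

for name, (pad_cases, n) in groups.items():
    rate = pad_cases / n
    print(f"{name}: {rate:.2%} with PAD, about {rate / control_rate:.1f}x the control rate")
```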
The link between gout and PAD remained after adjustment for creatinine levels, age, gender, and body mass index. Diabetes was linked to a higher risk for PAD than was gout, and the effect of the two conditions combined was “less than additive,” which “may suggest an overlap in pathophysiology between the two,” she said.
Disclosure information was not provided. The Rheumatology Research Foundation funded the PAD study; funding information for the ULT/gout flare analysis was not provided.
AT G-CAN 2023
Sensory comeback: New findings show the path to smell and taste recovery after COVID
Good news for people struggling with sensory problems after a bout of COVID-19. Although mild cases of the disease often impair the ability to taste and smell, and the problem can drag on for months, a new study from Italy shows that most people return to their senses, as it were, within 3 years.
The findings, from a group led by Dr. Boscolo-Rizzo, were published as a research letter in JAMA Otolaryngology–Head & Neck Surgery.
Dr. Boscolo-Rizzo and his colleagues analyzed data from 88 adults with mild COVID-19, which was defined as having no lower respiratory disease and blood oxygen saturation of 94% or greater. Another group of 88 adults who never contracted the virus but sometimes had difficulties with smell and taste were also studied. In both groups, the average age was 49 years, all participants were White, and 58% were women.
The researchers tested participants’ sense of smell with sticks that contained different odors and checked their sense of taste with strips that had different tastes. Over time, fewer people had difficulty distinguishing odors. Three years after developing COVID-19, only 12 people had impaired smell, compared with 36 people at year 1 and 24 people at year 2. And at the 3-year mark, all participants had at least a partial ability to smell.
The story was similar with sense of taste, with 10 of 88 people reporting impairments 3 years later. By then, people with COVID-19 were no more likely to have trouble with smell or taste than people who did not get the virus.
A study published this past June showed a strong correlation between the severity of COVID-19 symptoms and impaired sense of taste and smell, and estimated that millions of Americans had lasting alterations in these senses. In the Italian study, more than 10% of participants still had trouble with smell or taste 3 years later.
Emerging treatments, psychological concerns
“We’re seeing fewer people with this problem, but there are still people suffering from it,” said Fernando Carnavali, MD, an internal medicine physician and a site director for the Center for Post-COVID Care at the Icahn School of Medicine at Mount Sinai, New York City.
Dr. Carnavali wasn’t part of this study, but he did find the new results encouraging, and he called for similar studies in diverse populations that have experienced COVID-19. He also noted that an impaired sense of smell is distressing.
“It really has a significant psychological impact,” Dr. Carnavali said.
He recalled a patient crying in his office because her inability to smell made it impossible for her to cook. Dr. Carnavali recommended clinicians refer patients facing protracted loss of smell or taste to mental health professionals for support.
Treatments are emerging for COVID-19 smell loss. One approach is to inject platelet-rich plasma into a patient’s nasal cavities to help neurons related to smell repair themselves.
A randomized trial showed platelet-rich plasma significantly outperformed placebo in patients with smell loss up to a year after getting COVID-19.
“I wish more people would do it,” said Zara Patel, MD, an otolaryngologist at Stanford (Calif.) Medicine, who helped conduct that trial. She said some physicians may be nervous about injecting plasma so close to the skull and are therefore hesitant to try this approach.
Another technique may help address parosmia, an olfactory condition in which patients perceive normally benign odors as rancid, according to otolaryngologist Nyssa Farrell, MD, of Washington University School of Medicine, St. Louis. Dr. Farrell said around two-thirds of patients who contract COVID-19 develop the condition, with rates of long-term parosmia ranging from 10% to 50% across studies.
“It is almost always foul; this can profoundly affect someone’s quality of life,” impairing their ability to eat or to be intimate with a partner who now smells unpleasant, said Dr. Farrell, who wasn’t associated with this research.
The treatment, called a stellate ganglion block, is delivered as an injection into nerves in the neck. People with parosmia associated with COVID-19 often report that it cures them, but Dr. Patel said any benefit may reflect improvement in psychological health rather than in smell itself, because the area where the stellate ganglion block is applied is not part of the olfactory system.
Earlier this year, Dr. Farrell and colleagues reported that parosmia linked to COVID-19 is associated with an increased risk for depression, anxiety, and suicidal ideation.
One coauthor reported receiving grants from Smell and Taste Lab, Takasago, Baia Foods, and Frequency Therapeutics. The other authors reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JAMA OTOLARYNGOLOGY–HEAD & NECK SURGERY
AI algorithm aids egg retrieval date during fertility treatment cycles
An artificial intelligence (AI) algorithm can help predict the date of egg retrieval during fertility treatment cycles, according to research presented at the American Society for Reproductive Medicine (ASRM) 2023 meeting. Such an algorithm is needed, the researchers say, because of the increased demand for fertility treatments and the high day-to-day variability in laboratory workload.
According to the study investigators, predicting retrieval dates in advance for ongoing cycles is of major importance for both patients and clinicians.
“The population requiring fertility treatments, including genetic testing and fertility preservation, has massively increased, and this causes many more cycles and a high day-to-day variability in IVF activity, especially in the lab workload,” said Rohi Hourvitz, MBA, from FertilAI, an Israeli health care company focused on developing technologies that improve fertility treatments.
“We also need to accommodate and reschedule for non-working days, which causes a big issue with managing the workload in many clinics around the world,” added Mr. Hourvitz, who presented the research highlighting AI’s growing role in reproductive medicine.
In addition, AI has recently emerged as an effective tool for assisting in clinical decision-making in assisted reproductive technology, prompting further research in this space, he said.
The new study used a dataset of 9,550 predictable antagonist cycles (defined as having all necessary data) gathered from one lab with over 50 physicians between August 2018 and October 2022. The data were split into two subsets: one for training the AI model and the other for prospective testing.
To train and test the AI model, data from nearly 6,000 predictable antagonist cycles were used. Key factors used for each cycle included estrogen levels, mean follicle size, primary follicle size, and various patient demographics. Other features were considered, but Mr. Hourvitz noted that primary follicle size influenced the algorithm most, “because that is what most of us use when we want to trigger.”
Mr. Hourvitz explained that these patient data were run through an algorithm that produced a graph predicting the most probable date for a cycle retrieval.
“We could accurately predict when those ‘peak days’ were going to be happening in the clinic, and we could also give a pretty good estimate on how many cycles you’re going to have every day,” Mr. Hourvitz said, explaining that this information could help clinics more efficiently allocate resources and manage patients.
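For illustration only, the sketch below shows one way such a prediction could be set up: a regression model that maps mid-cycle measurements to the number of days until retrieval. The feature names, the gradient-boosting model, the synthetic data, and the point-estimate output (rather than the probability curve described above) are all assumptions made for the sake of the example, not details of the FertilAI algorithm.

```python
# Minimal sketch of a retrieval-date prediction model. Features, model
# choice, and data are illustrative assumptions; the study's actual
# algorithm and inputs are not reproduced here.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Synthetic antagonist-cycle snapshots taken on a monitoring day.
cycles = pd.DataFrame({
    "estradiol_pg_ml": rng.normal(1200, 400, n).clip(100),
    "mean_follicle_mm": rng.normal(14, 3, n).clip(4),
    "lead_follicle_mm": rng.normal(17, 3, n).clip(5),
    "age_years": rng.normal(35, 5, n).clip(20, 45),
})

# Toy target: larger lead follicles imply fewer days until retrieval.
days_to_retrieval = (
    (22 - cycles["lead_follicle_mm"]).clip(0, 10) + rng.normal(0, 1, n)
)

X_train, X_test, y_train, y_test = train_test_split(
    cycles, days_to_retrieval, test_size=0.25, random_state=0)

# Fit the regressor and predict days-to-retrieval for new cycles.
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("Predicted days to retrieval (first 5 test cycles):",
      model.predict(X_test.head()).round(1))
```

Aggregating per-cycle predictions by calendar date would then give the kind of daily caseload forecast described above.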
According to Mr. Hourvitz, the predictions derived from this study could improve various aspects of fertility treatments and related procedures, including better staff planning and caseload management in IVF labs, as well as higher-quality eggs at retrieval. Patients would have a clearer timeline for their treatment cycles.
Nikica Zaninovic, PhD, MS, director of the embryology lab at Weill Cornell Medical College, New York City, cautioned that the new findings are not yet ready for clinical application but emphasized the importance of more AI research focusing on the quality of oocytes, not only embryos.
“We’re so focused on the end of the process: the embryo,” Dr. Zaninovic, who was not involved in the research, said in an interview. “I think the focus should be on the beginning – the quality of eggs and sperm, not just the quantity – because that’s what the embryos will depend on.”
He noted the increasing numbers of young women in the United States undergoing egg freezing.
“Cornell is the largest academic IVF center in the United States; 20%-30% of all of the patients that we treat are actually freezing their eggs,” he said. “It’s a huge population.”
“When they come to us, they ask how many eggs they’ll need to guarantee one or two children in the future,” Dr. Zaninovic continued. “We don’t have that answer, so we always tell them [we’ll retrieve] as many as we can. That’s not the answer; we need to be more precise. We’re still lacking these tools, and I think that’s where the research will go.”
The study was funded by FertilAI. Mr. Hourvitz is a shareholder and CEO of FertilAI. Dr. Zaninovic is president of the AI Fertility Society.
A version of this article appeared on Medscape.com.
According to the researchers, such an algorithm is needed due to the increased demand for fertility treatments, as well as the high day-to-day variability in lab workload.
According to the study investigators, predicting retrieval dates in advance for ongoing cycles is of major importance for both patients and clinicians.
“The population requiring fertility treatments, including genetic testing and fertility preservation, has massively increased, and this causes many more cycles and a high day-to-day variability in IVF activity, especially in the lab workload,” said Rohi Hourvitz, MBA, from FertilAI, an Israeli health care company focused on developing technologies that improve fertility treatments.
“We also need to accommodate and reschedule for non-working days, which causes a big issue with managing the workload in many clinics around the world,” added Mr. Hourvitz, who presented the research highlighting AI’s growing role in reproductive medicine.
In addition, AI has recently emerged as an effective tool for assisting in clinical decision-making in assisted reproductive technology, prompting further research in this space, he said.
The new study used a dataset of 9,550 predictable antagonist cycles (defined as having all necessary data) gathered from one lab with over 50 physicians between August 2018 and October 2022. The data were split into two subsets: one for training the AI model and the other for prospective testing.
To train and test the AI model, data from nearly 6,000 predictable antagonist cycles were used. Key factors used for each cycle included estrogen levels, mean follicle size, primary follicle size, and various patient demographics. Other features were considered, but Mr. Hourvitz noted that primary follicle size influenced the algorithm most, “because that is what most of us use when we want to trigger.”
Mr. Hourvitz explained that these patient data were run through an algorithm that produced a graph predicting the most probable date for a cycle retrieval.
“We could accurately predict when those ‘peak days’ were going to be happening in the clinic, and we could also give a pretty good estimate on how many cycles you’re going to have every day,” Mr. Hourvitz said, explaining that this information could help clinics more efficiently allocate resources and manage patients.
According to Mr. Hourvitz, the predictions derived from this study could improve various aspects of fertility treatments and related procedures, including better staff planning and caseload management in IVF labs, as well as higher-quality eggs at retrieval. Patients would have a clearer timeline for their treatment cycles.
Nikica Zaninovic, PhD, MS, director of the embryology lab at Weill Cornell Medical College, New York City, cautioned that the new findings are not yet ready for clinical application but emphasized the importance of more AI research focusing on the quality of oocytes, not only embryos.
“We’re so focused on the end of the process: the embryo,” Dr. Zaninovic, who was not involved in the research, said in an interview. “I think the focus should be on the beginning – the quality of eggs and sperm, not just the quantity – because that’s what the embryos will depend on.”
He noted the increasing numbers of young women in the United States undergoing egg freezing.
“Cornell is the largest academic IVF center in the United States; 20%-30% of all of the patients that we treat are actually freezing their eggs,” he said. “It’s a huge population.”
“When they come to us, they ask how many eggs they’ll need to guarantee one or two children in the future,” Dr. Zaninovic continued. “We don’t have that answer, so we always tell them [we’ll retrieve] as many as we can. That’s not the answer; we need to be more precise. We’re still lacking these tools, and I think that’s where the research will go.”
The study was funded by FertilAI. Mr. Hourvitz is a shareholder and CEO of FertilAI. Dr. Zaninovic is president of the AI Fertility Society.
A version of this article appeared on Medscape.com.
According to the researchers, such an algorithm is needed due to the increased demand for fertility treatments, as well as the high day-to-day variability in lab workload.
According to the study investigators, predicting retrieval dates in advance for ongoing cycles is of major importance for both patients and clinicians.
“The population requiring fertility treatments, including genetic testing and fertility preservation, has massively increased, and this causes many more cycles and a high day-to-day variability in IVF activity, especially in the lab workload,” said Rohi Hourvitz, MBA, from FertilAI, an Israeli health care company focused on developing technologies that improve fertility treatments.
“We also need to accommodate and reschedule for non-working days, which causes a big issue with managing the workload in many clinics around the world,” added Mr. Hourvitz, who presented the research highlighting AI’s growing role in reproductive medicine.
In addition, AI has recently emerged as an effective tool for assisting in clinical decision-making in assisted reproductive technology, prompting further research in this space, he said.
The new study used a dataset of 9,550 predictable antagonist cycles (defined as having all necessary data) gathered from one lab with over 50 physicians between August 2018 and October 2022. The data were split into two subsets: one for training the AI model and the other for prospective testing.
To train and test the AI model, data from nearly 6,000 predictable antagonist cycles were used. Key factors used for each cycle included estrogen levels, mean follicle size, primary follicle size, and various patient demographics. Other features were considered, but Mr. Hourvitz noted that primary follicle size influenced the algorithm most, “because that is what most of us use when we want to trigger.”
Mr. Hourvitz explained that these patient data were run through an algorithm that produced a graph predicting the most probable date for a cycle retrieval.
“We could accurately predict when those ‘peak days’ were going to be happening in the clinic, and we could also give a pretty good estimate on how many cycles you’re going to have every day,” Mr. Hourvitz said, explaining that this information could help clinics more efficiently allocate resources and manage patients.
According to Mr. Hourvitz, the predictions derived from this study could improve various aspects of fertility treatments and related procedures, including better staff planning and caseload management in IVF labs, as well as higher-quality eggs at retrieval. Patients would have a clearer timeline for their treatment cycles.
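The underlying model was not described in technical detail at the meeting, so the following is only an illustrative sketch of a pipeline with this general shape: a regression that estimates days to retrieval for each monitored cycle, with the per-cycle predictions then rolled up into the kind of daily caseload forecast Mr. Hourvitz described. The feature names, the model choice, and the synthetic numbers are assumptions for illustration, not details from the study.

```python
# Illustrative sketch only: one way a retrieval-date predictor could be framed.
# Feature names, model choice, and data are hypothetical stand-ins.
from collections import Counter
from datetime import date, timedelta

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
feature_names = ["estradiol", "mean_follicle_mm", "lead_follicle_mm", "age"]

# Synthetic stand-in data: one row per monitoring visit of an ongoing cycle.
X_train = rng.normal(loc=[1500, 14, 18, 35], scale=[600, 3, 3, 5], size=(500, 4))
y_train = rng.uniform(1, 6, size=500)        # days from this visit to retrieval

model = GradientBoostingRegressor().fit(X_train, y_train)
print(dict(zip(feature_names, model.feature_importances_.round(2))))

# Forecast clinic workload: predict a retrieval date for each active cycle,
# then count how many retrievals land on each calendar day.
today = date(2022, 10, 3)
X_active = rng.normal(loc=[1500, 14, 18, 35], scale=[600, 3, 3, 5], size=(40, 4))
days_out = np.clip(np.round(model.predict(X_active)), 1, None).astype(int)

caseload = Counter(today + timedelta(days=int(d)) for d in days_out)
for day, n in sorted(caseload.items()):
    print(day, n, "predicted retrievals")    # likely 'peak days' stand out here
```

A real system would, of course, be trained on actual cycle records rather than synthetic numbers and evaluated against real retrieval dates before being used for scheduling.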
Nikica Zaninovic, PhD, MS, director of the embryology lab at Weill Cornell Medical College, New York City, cautioned that the new findings are not yet ready for clinical application but emphasized the importance of more AI research focusing on the quality of oocytes, not only embryos.
“We’re so focused on the end of the process: the embryo,” Dr. Zaninovic, who was not involved in the research, said in an interview. “I think the focus should be on the beginning – the quality of eggs and sperm, not just the quantity – because that’s what the embryos will depend on.”
He noted the increasing numbers of young women in the United States undergoing egg freezing.
“Cornell is the largest academic IVF center in the United States; 20%-30% of all of the patients that we treat are actually freezing their eggs,” he said. “It’s a huge population.”
“When they come to us, they ask how many eggs they’ll need to guarantee one or two children in the future,” Dr. Zaninovic continued. “We don’t have that answer, so we always tell them [we’ll retrieve] as many as we can. That’s not the answer; we need to be more precise. We’re still lacking these tools, and I think that’s where the research will go.”
The study was funded by FertilAI. Mr. Hourvitz is a shareholder and CEO of FertilAI. Dr. Zaninovic is president of the AI Fertility Society.
A version of this article appeared on Medscape.com.
FROM ASRM 2023
Excellent outcome of Ross procedure after 2 decades
TOPLINE:
Patients who undergo the Ross procedure have a survival rate equivalent to that of the general population 25 years later, results of a new study show. The need for reintervention in these patients is low.
METHODOLOGY:
- The study was a post hoc analysis of a randomized clinical trial that showed superior survival, freedom from reoperation, and quality of life at 10 years for patients who received the Ross procedure, compared with those who got homograft root replacement.
- This new analysis included 108 patients (median age, 38 years; mostly male and of British origin) who underwent the Ross procedure. Of these, 45% had aortic regurgitation (AR) as the main hemodynamic lesion.
- The primary outcome was long-term survival, compared with an age-, sex-, and country of origin–matched general U.K. population using a novel, patient-level matching strategy. Secondary outcomes included freedom from any valve-related reintervention, autograft reintervention, or homograft reintervention.
TAKEAWAY:
- Survival at 25 years was 83.0% (95% confidence interval, 75.5%-91.2%), representing a relative survival of 99.1% (95% CI, 91.8%-100%) compared with the matched general population, in which survival was 83.7% (see the arithmetic note after these takeaways).
- At 25 years, freedom from any Ross-related reintervention was 71.1% (95% CI, 61.6%-82.0%); freedom from autograft reintervention was 80.3% (95% CI, 71.9%-89.6%); and freedom from homograft reintervention was 86.3% (95% CI, 79.0%-94.3%).
- There was no increased hazard for autograft deterioration in patients presenting with versus without preoperative AR, an important finding since it has been suggested Ross procedure benefits may not extend fully to patients with preoperative AR, said the authors.
- 86% of patients had New York Heart Association class I or II status at the latest clinical follow-up (approaching 25 years).
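For readers less familiar with the term, relative survival is roughly the observed survival in the cohort divided by the expected survival in the matched general population. The study's 99.1% figure comes from patient-level matching, so the crude ratio below is only an approximate check, not the authors' calculation:

\[ \text{relative survival at 25 years} \approx \frac{83.0\%}{83.7\%} \approx 99.2\%, \]

which is in line with the reported 99.1%.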
IN PRACTICE:
This study shows the Ross procedure “provided excellent survival into the third decade after surgery,” with the new data further supporting “the unique benefits” of the valve substitute in adults, the authors conclude.
Authors of an accompanying editorial, Tsuyoshi Kaneko, MD, Division of Cardiothoracic Surgery, Washington University School of Medicine, St. Louis, and Maral Ouzounian, MD, PhD, Peter Munk Cardiac Centre, Division of Cardiac Surgery, University Health Network, University of Toronto, write that the new evidence suggests the Ross procedure is “a truly attractive option in younger patients with long life expectancy.” However, they note that aortic regurgitation in the cohort worsened over time, potentially leading to late reinterventions; echocardiographic follow-up was available in only 71% of patients; and generalizing the Ross procedure to a broader group of surgeons is challenging.
SOURCE:
The study was conducted by Maximiliaan L. Notenboom, BSc, department of cardiothoracic surgery, Erasmus University Medical Center, Rotterdam, the Netherlands, and colleagues. It was published online in JAMA Cardiology.
LIMITATIONS:
The analysis reflects a single-surgeon experience, so it’s difficult to extrapolate the results, although the operative steps involved in the Ross procedure have now been clearly delineated, making the operation reproducible. The duration of echocardiographic follow-up was shorter and less complete than the clinical follow-up. Outcomes of the cohort that underwent homograft procedures in the randomized clinical trial were not reported, but since that procedure has nearly disappeared from practice, reporting on its long-term outcomes would be of limited clinical significance.
DISCLOSURES:
Mr. Notenboom has disclosed no relevant financial relationships. Co-author Fabio De Robertis, MD, department of cardiothoracic surgery and transplantation, Royal Brompton & Harefield Hospitals, London, received nonfinancial support from Edwards Lifesciences for travel and personal fees from Bristol Myers Squibb for consulting outside the submitted work, and has a service agreement with Medtronic U.K., which paid a fee to the Royal Brompton & Harefield Hospitals Charity Fund.
Editorial co-author Kaneko received personal fees from Edwards Lifesciences, Medtronic, Abbott, and Johnson & Johnson outside the submitted work; Ouzounian received personal fees from Medtronic, Edwards Lifesciences, and Terumo Aortic outside the submitted work.
A version of this article appeared on Medscape.com.
Women have worse outcomes in cardiogenic shock
“These data identify the need for us to continue working to identify barriers in terms of diagnosis, management, and technological innovations for women in cardiogenic shock to resolve these issues and improve outcomes,” the senior author of the study, Navin Kapur, MD, Tufts Medical Center, Boston, said in an interview.
The study is said to be one of the largest contemporary analyses of real-world registry data on the characteristics and outcomes of women in comparison with men with cardiogenic shock.
It showed sex-specific differences in outcomes that were primarily driven by differences in heart failure–related cardiogenic shock. Women with heart failure–related cardiogenic shock had more severe cardiogenic shock, worse survival at discharge, and more vascular complications than men. Outcomes in cardiogenic shock related to MI were similar for men and women.
The study, which will be presented at the upcoming annual meeting of the American Heart Association, was published online in JACC: Heart Failure.
Dr. Kapur founded the Cardiogenic Shock Working Group in 2017 to collect quality data on the condition.
“We realized our patients were dying, and we didn’t have enough data on how best to manage them. So, we started this registry, and now have detailed data on close to 9,000 patients with cardiogenic shock from 45 hospitals in the U.S., Mexico, Australia, and Japan,” he explained.
“The primary goal is to try to investigate the questions related to cardiogenic shock that can inform management, and one of the key questions that came up was differences in how men and women present with cardiogenic shock and what their outcomes may be. This is what we are reporting in this paper,” he added.
Cardiogenic shock is defined as a state of low cardiac output, most commonly caused by MI or an episode of acute heart failure, Dr. Kapur said. Patients with cardiogenic shock are identified by their low blood pressure or hypoperfusion evidenced by clinical exam or biomarkers, such as elevated lactate levels.
“In this analysis, we’re looking at patients presenting with cardiogenic shock, so we’re not looking at the incidence of the condition in men versus women,” Dr. Kapur noted. “However, we believe that cardiogenic shock is probably more underrepresented in women, who may present with an MI or acute heart failure and may or may not be identified as having low cardiac output states until quite late. The likelihood is that the incidence is similar in men and women, but women are more often undiagnosed.”
For the current study, the authors analyzed data on 5,083 patients with cardiogenic shock in the registry, of whom 1,522 (30%) were women. Compared with men, women had slightly higher body mass index (BMI) and smaller body surface area.
Results showed that women with heart failure–related cardiogenic shock had worse survival at discharge than men (69.9% vs. 74.4%) and a higher rate of refractory shock (SCAI stage E; 26% vs. 21%). Women were also less likely to undergo pulmonary artery catheterization (52.9% vs. 54.6%), heart transplantation (6.5% vs. 10.3%), or left ventricular assist device implantation (7.8% vs. 10%).
Regardless of cardiogenic shock etiology, women had more vascular complications (8.8% vs. 5.7%), bleeding (7.1% vs. 5.2%), and limb ischemia (6.8% vs. 4.5%).
“This analysis is quite revealing. We identified some important distinctions between men and women,” Dr. Kapur commented.
For patients who present with MI-related cardiogenic shock, many of the baseline characteristics in men and women were quite similar, he said. “But in heart failure–related cardiogenic shock, we saw more differences, with typical comorbidities associated with cardiogenic shock [e.g., diabetes, chronic kidney disease, hypertension] being less common in women than in men. This suggests there may be phenotypic differences as to why women present with heart failure shock versus men.”
Dr. Kapur pointed out that differences in BMI or body surface area between men and women may play into some of the management decision-making.
“Women having a smaller stature may lead to a selection bias where we don’t want to use large-bore pumps or devices because we’re worried about causing complications. We found in the analysis that vascular complications such as bleeding or ischemia of the lower extremity where these devices typically go were more frequent in women,” he noted.
“We also found that women were less likely to receive invasive therapies in general, including pulmonary artery catheters, temporary mechanical support, and heart replacements, such as LVAD or transplants,” he added.
Further results showed that, after propensity score matching, some of the gender differences disappeared, but women continued to have a higher rate of vascular complications (10.4% women vs. 7.4% men).
But Dr. Kapur warned that the propensity-matched analysis had some caveats.
“Essentially what we are doing with propensity matching is creating two populations that are as similar as possible, and this reduced the number of patients in the analysis down to 25% of the original population,” he said. “One of the things we had to match was body surface area, and in doing this, we are taking out one of the most important differences between men and women, and as a result, a lot of the differences in outcomes go away.
“In this respect, propensity matching can be a bit of a double-edged sword,” he added. “I think the non–propensity-matched results are more interesting, as they are more of a reflection of the real world.”
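Dr. Kapur's description of the propensity-matched analysis can be illustrated with a minimal, hypothetical sketch: fit a model for the probability that a patient belongs to one group given baseline covariates, then pair each woman with the closest-scoring unmatched man. The covariates, caliper, and matching rule below are illustrative assumptions, not the registry's actual methods.

```python
# Minimal, hypothetical sketch of 1:1 propensity-score matching.
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_match(X, is_female, caliper=0.05):
    """Greedily pair each woman with the nearest unmatched man on the propensity score."""
    # Propensity score: modeled probability of being in the female group given covariates.
    ps = LogisticRegression(max_iter=1000).fit(X, is_female).predict_proba(X)[:, 1]
    women, men = np.where(is_female)[0], np.where(~is_female)[0]
    used, pairs = set(), []
    for w in women:
        candidates = [m for m in men if m not in used]
        if not candidates:
            break
        m = min(candidates, key=lambda j: abs(ps[j] - ps[w]))
        if abs(ps[m] - ps[w]) <= caliper:   # accept only sufficiently close matches
            pairs.append((w, m))
            used.add(m)
    return pairs  # unmatched patients on either side are dropped from the analysis

# Synthetic example: age, BMI, and body surface area as matching covariates.
rng = np.random.default_rng(1)
n = 200
is_female = rng.random(n) < 0.3
X = np.column_stack([
    rng.normal(60, 12, n),                                                    # age
    rng.normal(28, 5, n),                                                     # BMI
    np.where(is_female, rng.normal(1.7, 0.15, n), rng.normal(2.0, 0.15, n)),  # BSA
])
pairs = propensity_match(X, is_female)
print(f"{len(pairs)} matched pairs from {int(is_female.sum())} women and {int((~is_female).sum())} men")
```

Note how unmatched patients are simply dropped, which is why the matched cohort shrinks, and how matching on body surface area removes exactly the male–female difference Dr. Kapur highlights.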
Dr. Kapur concluded that these findings are compelling enough to suggest that there are important differences between women and men with cardiogenic shock in terms of outcomes as well as complication rates.
“Our decision-making around women seems to be different to that around men. I think this paper should start to trigger more awareness of that.”
Dr. Kapur also emphasized the importance of paying attention to vascular complications in women.
“The higher rates of bleeding and limb ischemia issues in women may explain the rationale for being less aggressive with invasive therapies in women,” he said. “But we need to come up with better solutions or technologies so they can be used more effectively in women. This could include adapting technology for smaller vascular sizes, which should lead to better outcome and fewer complications in women.”
He added that further granular data on this issue are needed. “We have very limited datasets in cardiogenic shock. There are few randomized controlled trials, and women are not well represented in such trials. We need to make sure we enroll women in randomized trials.”
Dr. Kapur said more women physicians who treat cardiogenic shock are also required, which would include cardiologists, critical care specialists, cardiac surgeons, and anesthesia personnel.
He pointed out that the two first authors of the current study are women – Van-Khue Ton, MD, Massachusetts General Hospital, Boston, and Manreet Kanwar, MD, Allegheny Health Network, Pittsburgh.
“We worked hard to involve women as principal investigators. They led the effort. These are investigations led by women, on women, to advance the care of women,” he commented.
Gender-related inequality
In an editorial accompanying publication of the study, Sara Kalantari, MD, and Jonathan Grinstein, MD, both of the University of Chicago, and Robert O. Roswell, MD, Hofstra University, Hempstead, N.Y., said these results “provide valuable information about gender-related inequality in care and outcomes in the management of cardiogenic shock, although the exact mechanisms driving these observed differences still need to be elucidated.”
“Broadly speaking, barriers in the care of women with heart failure and cardiogenic shock include a reduced awareness among both patients and providers, a deficiency of sex-specific objective criteria for guiding therapy, and unfavorable temporary mechanical circulatory support devices with higher rates of hemocompatibility-related complications in women,” they added.
“In the era of the multidisciplinary shock team and shock pathways with protocolized management algorithms, it is imperative that we still allow for personalization of care to match the physiologic needs of the patient in order for us to continue to close the gender gap in the care of patients presenting with cardiogenic shock,” the editorialists concluded.
A version of this article appeared on Medscape.com.
FROM AHA 2023
The challenges of palmoplantar pustulosis and other acral psoriatic disease
WASHINGTON – The approval last year of the interleukin (IL)-36 receptor antagonist spesolimab for treating generalized pustular psoriasis flares brightened the treatment landscape for this rare condition, and a recently published phase 2 study suggests a potential role of spesolimab for flare prevention, according to speakers at the annual research symposium of the National Psoriasis Foundation.
“The IL-36 receptor antagonists don’t seem to be quite the answer for [palmoplantar pustulosis] that they are for generalized pustular psoriasis [GPP],” Megan H. Noe, MD, MPH, assistant professor of dermatology at Harvard Medical School and a dermatologist at Brigham and Women’s Hospital, Boston, said at the meeting.
Psoriasis affecting the hands and feet – both pustular and nonpustular – has a higher impact on quality of life and causes greater functional disability than non-acral psoriasis, is less responsive to treatment, and has a “very confusing nomenclature” that complicates research and thus management, said Jason Ezra Hawkes, MD, a dermatologist in Rocklin, Calif., and former faculty member of several departments of dermatology. Both he and Dr. Noe spoke during a tough-to-treat session at the NPF meeting.
IL-17 and IL-23 blockade, as well as tumor necrosis factor (TNF) inhibition, are effective overall for palmoplantar psoriasis (nonpustular), but in general, responses are lower than for plaque psoriasis. Apremilast (Otezla), a phosphodiesterase-4 inhibitor, has some efficacy for pustular variants, but for hyperkeratotic variants it “does not perform as well as more selective inhibition of IL-17 and IL-23 blockade,” he said.
In general, “what’s happening in the acral sites is different from an immune perspective than what’s happening in the non-acral sites,” and more research utilizing a clearer, descriptive nomenclature is needed to tease out differing immunophenotypes, explained Dr. Hawkes, who has led multiple clinical trials of treatments for psoriasis and other inflammatory skin conditions.
Palmoplantar pustulosis, and a word on generalized disease
Dermatologists are using a variety of treatments for palmoplantar pustulosis, with no clear first-line choices, Dr. Noe said. In a case series of almost 200 patients with palmoplantar pustulosis across 20 dermatology practices, published in JAMA Dermatology, 35% of patients received a systemic therapy prescription at their initial encounter – most commonly acitretin, followed by methotrexate and phototherapy. “Biologics were used, but use was varied and not as often as with oral agents,” said Dr. Noe, a coauthor of the study.
TNF blockers led to improvements ranging from 57% to 84%, depending on the agent, in a 2020 retrospective study of patients with palmoplantar pustulosis or acrodermatitis continua of Hallopeau, Dr. Noe noted. However, rates of complete clearance were only 20%-29%.
Apremilast showed modest efficacy after 5 months of treatment, with 62% of patients achieving at least a 50% improvement in the Palmoplantar Pustulosis Psoriasis Area and Severity Index (PPPASI) in a 2021 open-label, phase 2 study involving 21 patients. “This may represent a potential treatment option,” Dr. Noe said. “It’s something, but not what we’re used to seeing in our plaque psoriasis patients.”
A 2021 phase 2a, double-blind, randomized, placebo-controlled study of spesolimab in patients with palmoplantar pustulosis, meanwhile, failed to meet its primary endpoint, with only 32% of patients achieving a 50% improvement at 16 weeks, compared with 24% of patients in the placebo arm. And a recently published network meta-analysis found that none of the five drugs studied in seven randomized controlled trials – biologic or oral – was more effective than placebo for clearance or improvement of palmoplantar pustulosis.
The spesolimab (Spevigo) results have been disappointing considering the biologic’s newfound efficacy and role as the first Food and Drug Administration–approved therapy for generalized pustular disease, according to Dr. Noe. The ability of a single 900-mg intravenous dose of the IL-36 receptor antagonist to completely clear pustules at 1 week in 54% of patients with generalized disease, compared with 6% of the placebo group, was “groundbreaking,” she said, referring to results of the pivotal trial published in the New England Journal of Medicine.
And given that “preventing GPP flares is ultimately what we want,” she said, more good news was reported this year in The Lancet: The finding from an international, randomized, placebo-controlled study that high-dose subcutaneous spesolimab significantly reduced the risk of a flare over 48 weeks. “There are lots of ongoing studies right now to understand the best way to dose spesolimab,” she said.
Moreover, another IL-36 receptor antagonist, imsidolimab, is being investigated in a phase 3 trial for generalized pustular disease, she noted. A phase 2, open-label study of patients with GPP found that “more than half of patients were very much improved at 4 weeks, and some patients started showing improvement at day 3,” Dr. Noe said.
An area of research she is interested in is the potential for Janus kinase (JAK) inhibitors as a treatment for palmoplantar pustulosis. For pustulosis on the hands and feet, recent case reports describing the efficacy of JAK inhibitors have caught her eye. “Right now, all we have is this case report data, mostly with tofacitinib, but I think it’s exciting,” she said, noting a recently published report in the British Journal of Dermatology.
Palmoplantar psoriasis
Pustular psoriatic disease can be localized to the hands and/or feet only, or can co-occur with generalized pustular disease, just as palmoplantar psoriasis can be localized to the hands and/or feet or, more commonly, can co-occur with widespread plaque psoriasis. Research has shown, Dr. Hawkes said, that with both types of acral disease, many patients have or have had plaque psoriasis outside of acral sites.
The nomenclature and acronyms for palmoplantar psoriatic disease have complicated patient education, communication, and research, Dr. Hawkes said. Does PPP refer to palmoplantar psoriasis, or palmoplantar pustulosis, for instance? What is the difference between palmoplantar pustulosis (coined PPP) and palmoplantar pustular psoriasis (referred to as PPPP)?
What if disease is only on the hands, only on the feet, or only on the backs of the hands? And at what point is disease not classified as palmoplantar psoriasis, but plaque psoriasis with involvement of the hands and feet? Inconsistencies and lack of clarification lead to “confusing” literature, he said.
Heterogeneity in populations across trials resulting from “inconsistent categorization and phenotype inclusion” may partly account for the recalcitrance to treatment reported in the literature, he said. Misdiagnosis as psoriasis in cases of localized disease (confusion with eczema, for instance), and the fact that hands and feet are subject to increased trauma and injury, compared with non-acral sites, are also at play.
Trials may also allow insufficient time for improvement, compared with non-acral sites. “What we’ve learned about the hands and feet is that it takes a much longer time for disease to improve,” Dr. Hawkes said, so primary endpoints must take this into account.
There is unique immunologic signaling in palmoplantar disease that differs from the predominant signaling in traditional plaque psoriasis, he emphasized, and “mixed immunophenotypes” that need to be unraveled.
Dr. Hawkes disclosed ties with AbbVie, Arcutis, Bristol-Myers Squibb, Boehringer Ingelheim, Janssen, LEO, Lilly, Novartis, Pfizer, Regeneron, Sanofi, Sun Pharma, and UCB. Dr. Noe disclosed ties to Bristol-Myers Squibb and Boehringer Ingelheim.
WASHINGTON – The approval last year of the interleukin (IL)-36 receptor antagonist spesolimab for treating generalized pustular psoriasis flares brightened the treatment landscape for this rare condition, and a recently published phase 2 study suggests a potential role for spesolimab in flare prevention, according to speakers at the annual research symposium of the National Psoriasis Foundation.
“The IL-36 receptor antagonists don’t seem to be quite the answer for [palmoplantar pustulosis] that they are for generalized pustular psoriasis [GPP],” Megan H. Noe, MD, MPH, assistant professor of dermatology at Harvard Medical School and a dermatologist at Brigham and Women’s Hospital, Boston, said at the meeting.
Psoriasis affecting the hands and feet – both pustular and nonpustular – has a greater impact on quality of life and causes more functional disability than non-acral psoriasis, is less responsive to treatment, and has a “very confusing nomenclature” that complicates research and thus management, said Jason Ezra Hawkes, MD, a dermatologist in Rocklin, Calif., and former faculty member of several departments of dermatology. Both he and Dr. Noe spoke during a tough-to-treat session at the NPF meeting.
IL-17 and IL-23 blockade, as well as tumor necrosis factor (TNF) inhibition, are effective overall for palmoplantar psoriasis (nonpustular), but in general, responses are lower than for plaque psoriasis. Apremilast (Otezla), a phosphodiesterase-4 inhibitor, has some efficacy for pustular variants, but for hyperkeratotic variants it “does not perform as well as more selective inhibition of IL-17 and IL-23 blockade,” he said.
In general, ”what’s happening in the acral sites is different from an immune perspective than what’s happening in the non-acral sites,” and more research utilizing a clearer, descriptive nomenclature is needed to tease out differing immunophenotypes, explained Dr. Hawkes, who has led multiple clinical trials of treatments for psoriasis and other inflammatory skin conditions.
Palmoplantar pustulosis, and a word on generalized disease
Dermatologists are using a variety of treatments for palmoplantar pustulosis, with no clear first-line choices, Dr. Noe said. In a case series of almost 200 patients with palmoplantar pustulosis across 20 dermatology practices, published in JAMA Dermatology, 35% of patients received a systemic therapy prescription at their initial encounter – most commonly acitretin, followed by methotrexate and phototherapy. “Biologics were used, but use was varied and not as often as with oral agents,” said Dr. Noe, a coauthor of the study.
TNF blockers led to improvements ranging from 57% to 84%, depending on the agent, in a 2020 retrospective study of patients with palmoplantar pustulosis or acrodermatitis continua of Hallopeau, Dr. Noe noted. However, rates of complete clearance were only 20%-29%.
Apremilast showed modest efficacy after 5 months of treatment, with 62% of patients achieving at least a 50% improvement in the Palmoplantar Pustulosis Psoriasis Area and Severity Index (PPPASI) in a 2021 open-label, phase 2 study involving 21 patients. “This may represent a potential treatment option,” Dr. Noe said. “It’s something, but not what we’re used to seeing in our plaque psoriasis patients.”
A 2021 phase 2a, double-blind, randomized, placebo-controlled study of spesolimab in patients with palmoplantar pustulosis, meanwhile, failed to meet its primary endpoint, with only 32% of patients achieving a 50% improvement at 16 weeks, compared with 24% of patients in the placebo arm. And a recently published network meta-analysis found that none of the five drugs studied in seven randomized controlled trials – biologic or oral – was more effective than placebo for clearance or improvement of palmoplantar pustulosis.
The spesolimab (Spevigo) results in palmoplantar pustulosis have been disappointing considering the biologic’s efficacy in generalized pustular disease, for which it is the first Food and Drug Administration–approved therapy, according to Dr. Noe. The ability of a single 900-mg intravenous dose of the IL-36 receptor antagonist to completely clear pustules at 1 week in 54% of patients with generalized disease, compared with 6% of the placebo group, was “groundbreaking,” she said, referring to results of the pivotal trial published in the New England Journal of Medicine.
And given that “preventing GPP flares is ultimately what we want,” she said, more good news was reported this year in The Lancet: The finding from an international, randomized, placebo-controlled study that high-dose subcutaneous spesolimab significantly reduced the risk of a flare over 48 weeks. “There are lots of ongoing studies right now to understand the best way to dose spesolimab,” she said.
Moreover, another IL-36 receptor antagonist, imsidolimab, is being investigated in a phase 3 trial for generalized pustular disease, she noted. A phase 2, open-label study of patients with GPP found that “more than half of patients were very much improved at 4 weeks, and some patients started showing improvement at day 3,” Dr. Noe said.
An area of research she is interested in is the potential for Janus kinase (JAK) inhibitors as a treatment for palmoplantar pustulosis. For pustulosis on the hands and feet, recent case reports describing the efficacy of JAK inhibitors have caught her eye. “Right now, all we have is this case report data, mostly with tofacitinib, but I think it’s exciting,” she said, noting a recently published report in the British Journal of Dermatology.
Palmoplantar psoriasis
Pustular psoriatic disease can be localized to the hands and/or feet only, or can co-occur with generalized pustular disease, just as palmoplantar psoriasis can be localized to the hands and/or feet or, more commonly, can co-occur with widespread plaque psoriasis. Research has shown, Dr. Hawkes said, that with both types of acral disease, many patients have or have had plaque psoriasis outside of acral sites.
The nomenclature and acronyms for palmoplantar psoriatic disease have complicated patient education, communication, and research, Dr. Hawkes said. Does PPP refer to palmoplantar psoriasis or to palmoplantar pustulosis, for instance? And what distinguishes palmoplantar pustulosis (often abbreviated PPP) from palmoplantar pustular psoriasis (referred to as PPPP)?
What if disease is only on the hands, only on the feet, or only on the backs of the hands? And at what point is disease not classified as palmoplantar psoriasis, but plaque psoriasis with involvement of the hands and feet? Inconsistencies and lack of clarification lead to “confusing” literature, he said.
Heterogeneity in populations across trials resulting from “inconsistent categorization and phenotype inclusion” may partly account for the recalcitrance to treatment reported in the literature, he said. Misdiagnosis of localized disease (confusion with eczema, for instance), and the fact that hands and feet are subject to more trauma and injury than non-acral sites, are also at play.
Trials may also allow insufficient time for improvement, compared with non-acral sites. “What we’ve learned about the hands and feet is that it takes a much longer time for disease to improve,” Dr. Hawkes said, so primary endpoints must take this into account.
There is unique immunologic signaling in palmoplantar disease that differs from the predominant signaling in traditional plaque psoriasis, he emphasized, and “mixed immunophenotypes” that need to be unraveled.
Dr. Hawkes disclosed ties with AbbVie, Arcutis, Bristol-Myers Squibb, Boehringer Ingelheim, Janssen, LEO, Lilly, Novartis, Pfizer, Regeneron, Sanofi, Sun Pharma, and UCB. Dr. Noe disclosed ties to Bristol-Myers Squibb and Boehringer Ingelheim.
AT THE NPF RESEARCH SYMPOSIUM 2023
Meta-analysis of postcancer use of immunosuppressive therapies shows no increase in cancer recurrence risk
Use of immunosuppressive therapies in patients with immune-mediated diseases and a prior malignancy was not associated with an increased risk of new or recurrent cancer in a meta-analysis that covered approximately 24,000 patients and 86,000 person-years of follow-up.
The findings could “help guide clinical decision making,” providing “reassurance that it remains safe to use conventional immunomodulators, anti-TNF [tumor necrosis factor] agents, or newer biologics in individuals with [immune-mediated diseases] with a prior malignancy consistent with recent guidelines,” Akshita Gupta, MD, of Massachusetts General Hospital, Boston, and coinvestigators wrote in Clinical Gastroenterology and Hepatology.
And because a stratification of studies by the timing of immunosuppressive therapy initiation found no increased risk when treatment was started within 5 years of a cancer diagnosis compared with later initiation, the meta-analysis could “potentially reduce the time to initiation of immunosuppressive treatment,” the authors wrote, noting a continued need for individualized decision-making.
Ustekinumab, a monoclonal antibody targeting interleukin-12 and IL-23, and vedolizumab, a monoclonal antibody that binds to alpha4beta7 integrin, were covered in the meta-analysis, but investigators found no studies on the use of upadacitinib or other Janus kinase (JAK) inhibitors, or the use of S1P modulators, in patients with prior malignancies.
The analysis included 31 observational studies, 17 of which involved patients with inflammatory bowel disease (IBD). (Of the other studies, 14 involved patients with rheumatoid arthritis, 2 covered psoriasis, and 1 covered ankylosing spondylitis.)
Similar levels of risk
The incidence rate of new or recurrent cancers among individuals not receiving any immunosuppressive therapy for IBD or other immune-mediated diseases after an index cancer was 35 per 1,000 patient-years (95% confidence interval, 27-43 per 1,000 patient-years; 1,627 incident cancers among 12,238 patients, 43,765 patient-years), and the rate among anti-TNF users was similar at 32 per 1,000 patient-years (95% CI, 25-38 per 1,000 patient-years; 571 cancers among 3,939 patients, 17,772 patient-years).
Among patients on conventional immunomodulator therapy (thiopurines, methotrexate), the incidence rate was numerically higher at 46 per 1,000 patient-years (95% CI, 31-61; 1,104 incident cancers among 5,930 patients; 17,018 patient-years), but was not statistically different from anti-TNF (P = .92) or no immunosuppression (P = .98).
Patients on combination immunosuppression also had numerically higher rates of new or recurrent cancers at 56 per 1,000 patient-years (95% CI, 31-81; 179 incident cancers, 2,659 patient-years), but these rates were not statistically different from immunomodulator use alone (P = .19), anti-TNF alone (P = .06) or no immunosuppressive therapy (P = .14).
Patients on ustekinumab and vedolizumab had numerically lower rates of new or recurrent cancer compared with the other treatment groups: 21 per 1,000 patient-years (95% CI, 0-44; 5 cancers among 41 patients, 213 patient-years) and 16 per 1,000 patient-years (95% CI, 5-26; 37 cancers among 281 patients, 1,951 patient-years), respectively. However, the difference was statistically significant only for vedolizumab (P = .03 vs. immunomodulators and P = .04 vs. anti-TNF agents).
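For readers who want to see the arithmetic behind rates expressed per 1,000 patient-years, a minimal Python sketch follows, using the counts quoted above for the no-immunosuppression group purely as an illustration. The confidence-interval method shown (a log-scale Poisson approximation) is an assumption for this example; the meta-analysis itself reports random-effects pooled estimates, which will not match crude ratios exactly.

import math

def crude_rate_per_1000(events, patient_years):
    # Crude incidence rate per 1,000 patient-years of follow-up
    return 1000 * events / patient_years

def poisson_ci_per_1000(events, patient_years, z=1.96):
    # Approximate 95% CI, treating the event count as Poisson
    se_log = 1 / math.sqrt(events)                # standard error of log(rate)
    log_rate = math.log(events / patient_years)
    lower = 1000 * math.exp(log_rate - z * se_log)
    upper = 1000 * math.exp(log_rate + z * se_log)
    return lower, upper

# Illustration using the counts quoted above for the no-immunosuppression group
rate = crude_rate_per_1000(1627, 43765)
lower, upper = poisson_ci_per_1000(1627, 43765)
print(f"crude rate {rate:.1f} per 1,000 patient-years (95% CI {lower:.1f}-{upper:.1f})")
# The published pooled estimate (35 per 1,000) differs slightly because it comes
# from a random-effects meta-analytic model, not from this simple pooled ratio.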
Subgroup analyses for new primary cancers, recurrence of a prior cancer, and type of index cancer (skin cancer vs. other cancers) similarly found no statistically significant differences between treatment arms. Results were similar in patients with IBD and RA.
Timing of therapy
The new meta-analysis confirms and expands on a previous meta-analysis published in Gastroenterology in 2016 that showed no impact of treatment – primarily immunomodulator (IMM) or anti-TNF treatment – on cancer recurrence in patients with immune-mediated diseases, Dr. Gupta and coauthors wrote.
The 2016 meta-analysis reported similar cancer recurrence rates with IMMs and anti-TNFs whether immunosuppression was introduced within or beyond 6 years of the cancer diagnosis. In the new meta-analysis – with twice the number of patients, a longer duration of follow-up, and the inclusion of other biologic therapies – a stratification of results at the median interval to therapy initiation similarly found no increased risk before 5 years, compared with after 5 years.
“Although several existing guidelines recommend avoiding immunosuppression for 5 years after the index cancer, our results indicate that it may be safe to initiate these agents earlier than 5 years, at least in some patients,” Dr. Gupta and coauthors wrote, mentioning the possible impact of selection bias and surveillance bias in the study. Ongoing registries “may help answer this question more definitively with prospectively collected data, but inherently may suffer from this selection bias as well.”
Assessment of the newer biologics ustekinumab and vedolizumab is limited by the low number of studies (four and five, respectively) and by limited duration of follow-up. “Longer-term evaluation after these treatments is essential but it is reassuring that in the early analysis we did not observe an increase and in fact noted numerically lower rates of cancers,” they wrote.
It is also “critically important” to generate more data on JAK inhibitors, and to further study the safety of combining systemic chemotherapy and the continuation of IBD therapy in the setting of a new cancer diagnosis, they wrote.
The study was funded in part by grants from the Crohn’s and Colitis Foundation and the Chleck Family Foundation. Dr. Gupta disclosed no conflicts. One coauthor disclosed consulting for AbbVie, Amgen, Biogen, and other companies, and receiving grants from several companies. Another coauthor disclosed serving on the scientific advisory boards for AbbVie and other companies, and receiving research support from Pfizer.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
AGA publishes CPU for AI in colon polyp diagnosis and management
The American Gastroenterological Association has published a Clinical Practice Update (CPU) on artificial intelligence (AI) for diagnosing and managing colorectal polyps.
The CPU, authored by Jason Samarasena, MD, of UCI Health, Orange, Calif., and colleagues, draws on recent studies and clinical experience to discuss ways that AI is already reshaping colonoscopy, and what opportunities may lie ahead.
“As with any emerging technology, there are important questions and challenges that need to be addressed to ensure that AI tools are introduced safely and effectively into clinical endoscopic practice,” they wrote in Gastroenterology.
With advances in processing speed and deep-learning technology, AI “computer vision” can now analyze live video of a colonoscopy in progress, enabling computer-aided detection (CADe) and computer-aided diagnosis (CADx), which the panelists described as the two most important developments in the area.
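To illustrate that real-time “computer vision” loop in the most generic terms, the sketch below reads video frames with OpenCV and overlays boxes returned by a detector. The detect_polyps function is a hypothetical placeholder rather than any vendor’s CADe API, and commercial systems tap the endoscopy processor’s video feed rather than a default camera; this is only a conceptual illustration of frame-by-frame analysis.

import cv2  # OpenCV for frame capture and display; the detector below is a placeholder

def detect_polyps(frame):
    # Hypothetical stand-in for a trained CADe model; a real system would run a
    # deep-learning detector on the frame and return candidate bounding boxes.
    return []  # list of (x, y, w, h) tuples

cap = cv2.VideoCapture(0)  # 0 = default camera, used here only for illustration
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for (x, y, w, h) in detect_polyps(frame):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  # highlight candidate lesion
    cv2.imshow("CADe overlay (illustration)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop
        break
cap.release()
cv2.destroyAllWindows()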
CADe
“In the last several years, numerous prospective, multicenter studies have found that real-time use of AI CADe tools during colonoscopy leads to improvements in adenoma detection and other related performance metrics,” Dr. Samarasena and colleagues wrote.
CADe has yielded mixed success in real-world practice, however, with some studies reporting worse detection metrics after implementing the new technology. Dr. Samarasena and colleagues offered a variety of possible explanations for these findings, including a “ceiling effect” among highly adept endoscopists, reduced operator vigilance caused by false confidence in the technology, and potential confounding inherent to unblinded trials.
CADe may also increase health care costs and burden, they suggested, as the technology tends to catch small benign polyps, prompting unnecessary resections and shortened colonoscopy surveillance intervals.
CADx
These unintended consequences of CADe may be counteracted by CADx, which uses computer vision to predict which lesions have benign histology, enabling “resect-and-discard” or “diagnose-and-leave” strategies.
Such approaches could significantly reduce rates of polypectomy and/or histopathology, saving an estimated $33 million–150 million per year, according to the update.
Results of real-time CADx clinical trials have been “encouraging,” Dr. Samarasena and colleagues wrote, noting that emerging CADx tools compatible with standard white-light endoscopy can achieve a negative predictive value of almost 98% for lesions less than 5 mm in diameter, potentially reducing the polypectomy rate by almost half.
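For context on what a negative predictive value of about 98% means in practice, here is a minimal sketch with invented counts (not data from the cited trials): of the lesions an algorithm calls benign, NPV is the fraction that truly are benign on histology.

def npv(true_negatives, false_negatives):
    # Negative predictive value: among lesions called benign, the share truly benign
    return true_negatives / (true_negatives + false_negatives)

# Hypothetical example: 490 diminutive polyps optically labeled benign,
# 10 of which turn out to be neoplastic on histology -> NPV of 98%.
print(f"NPV = {npv(490, 10):.1%}")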
“Increasing endoscopist confidence in optical diagnosis may be an important step toward broader implementation of leave in situ and resect-and-discard strategies, but successful implementation will also require CADx tools that seamlessly integrate the endoscopic work flow, without the need for image enhancement or magnification,” the panelists wrote.
Reimbursement models may also need to be reworked, they suggested, as many GI practices depend on a steady stream of revenue from pathology services.
Computer-aided quality assessment systems
Beyond optical detection and diagnosis, AI tools are also being developed to improve colonoscopy technique.
Investigators are studying quality assessment systems that use AI to offer feedback on a range of endoscopist skills, including colonic-fold evaluation, level of mucosal exposure, and withdrawal time, the latter of which is visualized by a “speedometer” that “paints” the mucosa with “a graphical representation of the colon.”
“In the future, these types of AI-based systems may support trainees and lower-performing endoscopists to reduce exposure errors and, more broadly, may empower physician practices and hospital systems with more nuanced and actionable data on an array of factors that contribute to colonoscopy quality,” the panelists wrote.
Looking ahead
Dr. Samarasena and colleagues concluded by suggesting that the AI tools in use and in development are just the beginning of a wave of technology that will revolutionize how colonoscopies are performed.
“Eventually, we predict an AI suite of tools for colonoscopy will seem indispensable, as a powerful adjunct to support safe and efficient clinical practice,” they wrote. “As technological innovation progresses, we can expect that the future for AI in endoscopy will be a hybrid model, where the unique capabilities of physicians and our AI tools will be seamlessly intertwined to optimize patient care.”
This CPU was commissioned and approved by the AGA Institute Clinical Practice Updates Committee and the AGA Governing Board. The investigators disclosed relationships with Olympus, Neptune Medical, Conmed, and others.
FROM GASTROENTEROLOGY
The steep costs of disrupting gut-barrier harmony
An interview with Elena Ivanina, DO, MPH
From Ayurveda to the teachings of Hippocrates, medicine’s earliest traditions advanced a belief that the gut was the foundation of all health and disease. It wasn’t until recently, however, that Western medicine has adopted the notion of gut-barrier dysfunction as a pathologic phenomenon critical to not only digestive health but also chronic allergic, inflammatory, and autoimmune disease.
To learn more, Medscape contributor Akash Goel, MD, interviewed Elena Ivanina, DO, MPH, an integrative gastroenterologist, on the role of the gut barrier. Dr. Ivanina is the founder of the Center for Integrative Gut Health and the former director of Neurogastroenterology and Motility at Lenox Hill Hospital in New York. She runs the educational platform for all things gut health, gutlove.com.
What is the role of the gut barrier in overall health and disease?
The gut contains the human body’s largest interface between a person and their external environment. The actual interface is at the gut barrier, where there needs to be an ideal homeostasis and selectivity mechanism to allow the absorption of healthy nutrients, but on the other hand prevent the penetration of harmful microbes, food antigens, and other proinflammatory factors and toxins.
The gut barrier is made up of the mucus layer, gut microbiome, epithelial cells, and immune cells in the lamina propria. When this apparatus is disrupted by factors such as infection, low-fiber diet, antibiotics, and alcohol, then it cannot function normally to selectively keep out the harmful intraluminal substances.
Gut-barrier disruption leads to translocation of dangerous intraluminal components, such as bacteria and their components, into the gut wall and, most importantly, exposes the immune system to them. This causes improper immune activation and dysregulation, which has been shown to lead to various diseases, including gastrointestinal inflammatory disorders such as inflammatory bowel disease (IBD) and celiac disease, systemic autoimmune diseases such as multiple sclerosis and rheumatoid arthritis, and metabolic diseases such as obesity and diabetes.
Is disruption of this barrier what is usually referred to as “leaky gut”?
Leaky gut is a colloquial term for increased intestinal permeability, or intestinal hyperpermeability. In a 2019 review article, Dr. Michael Camilleri described leaky gut as a term that can be misleading and confusing to the general population. The review calls on clinicians to have an increased awareness of the potential role of barrier dysfunction in disease, and to consider the barrier as a target for treatment.
Is leaky gut more of a mechanism of underlying chronic disease or is it a disease of its own?
Intestinal hyperpermeability is a pathophysiologic process in the gut with certain risk factors that, in some conditions, has been shown to precede chronic disease. There has not been any convincing evidence that it can be diagnosed and treated as its own entity, but research is ongoing.
In IBD, the Crohn’s and Colitis Canada Genetic, Environmental, Microbial Project research consortium has been studying individuals at increased risk for Crohn’s disease because they have a first-degree family member with the disease. The consortium found an increased abundance of Ruminococcus torques in the microbiomes of at-risk individuals who went on to develop Crohn’s disease. R. torques is a mucin degrader that induces an increase in other mucin-using bacteria, which can contribute to gut-barrier compromise.
In other studies, patients have been found to have asymptomatic intestinal hyperpermeability years before their diagnosis of Crohn’s disease. This supports understanding more about the potential of intestinal hyperpermeability as its own diagnosis that, if addressed, could possibly prevent disease development.
The many possible sources of gut-barrier disruption
What causes leaky gut, and when should physicians and patients be suspicious if they have it?
There are many risk factors that have been associated with leaky gut in both human studies and animal studies, including acrolein (food toxin), aging, alcohol, antacid drugs, antibiotics, burn injury, chemotherapy, circadian rhythm disruption, corticosteroids, emulsifiers (food additives), strenuous exercise (≥ 2 hours) at 60% VO2 max, starvation, fructose, fructans, gliadin (wheat protein), high-fat diet, high-salt diet, high-sugar diet, hyperglycemia, low-fiber diet, nonsteroidal anti-inflammatory drugs, pesticide, proinflammatory cytokines, psychological stress, radiation, sleep deprivation, smoking, and sweeteners.
Patients may be completely asymptomatic with leaky gut. Physicians should be suspicious if there is a genetic predisposition to chronic disease or if any risk factors are unveiled after assessing diet and lifestyle exposures.
What is the role of the Western diet and processed food consumption in driving disruptions of the gut barrier?
The Western diet reduces gut-barrier mucus thickness, leading to increased gut permeability. People who consume a Western diet typically eat less than 15 grams of fiber per day, which is significantly less than many other cultures, including the hunter-gatherers of Tanzania (Hadza), who get 100 or more grams of fiber a day in their food.
With a fiber-depleted diet, gut microbiota that normally feed on fiber gradually disappear and other commensals shift their metabolism to degrade the gut-barrier mucus layer.
A low-fiber diet also decreases short-chain fatty acid production, which reduces production of mucus and affects tight junction regulation.
Emerging evidence on causality
New evidence is demonstrating that gastrointestinal conditions previously considered purely functional, like functional dyspepsia, are associated with abnormalities of the intestinal barrier. What is the association between conditions like functional dyspepsia and irritable bowel syndrome (IBS) and gut-barrier disruption?
Conditions such as functional dyspepsia and IBS are similar in that their pathophysiology is incompletely understood and likely attributable to contributions from many different underlying mechanisms. This makes it difficult for clinicians to explain the condition to patients and often to treat without specific therapeutic targets.
Emerging evidence with new diagnostic tools, such as confocal laser endomicroscopy, has demonstrated altered mucosal barrier function in both conditions.
In patients with IBS who have a suspected food intolerance, studies looking at exposure to the food antigens found that the food caused immediate breaks, increased intervillous spaces, and increased inflammatory cells in the gut mucosa. These changes were associated with patient responses to exclusion diets.
In functional dyspepsia, another study, using confocal laser endomicroscopy, has shown that affected patients have significantly greater epithelial gap density in the duodenum, compared with healthy controls. There was also impaired duodenal-epithelial barrier integrity and evidence of increased cellular pyroptosis in the duodenal mucosa.
These findings suggest that while IBS and functional dyspepsia are still likely multifactorial, there may be a common preclinical state that can be further investigated as far as preventing its development and using it as a therapeutic target.
What diagnostic testing are you using to determine whether patients have disruptions to the gut barrier? Are they validated or more experimental?
There are various testing strategies that have been used in research to diagnose intestinal hyperpermeability. In a 2021 analysis, Dr. Michael Camilleri found that the optimal probes for measuring small intestinal and colonic permeability are the mass excreted of 13C-mannitol at 0-2 hours and of lactulose during 2-8 hours, or of sucralose during 8-24 hours. Studies looking at postinfectious IBS have incorporated elevated urinary lactulose/mannitol ratios. Dr. Alessio Fasano and others have looked at using zonulin as a biomarker of impaired gut-barrier function. These tests are still considered experimental.
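As an illustration of how a dual-sugar permeability test is read, the sketch below computes a urinary lactulose/mannitol excretion ratio from the ingested doses and the amounts recovered in a timed urine collection. The doses, recoveries, and the idea of comparing against a laboratory cutoff are hypothetical placeholders; collection windows and reference thresholds vary by protocol.

def excretion_fraction(recovered_mg, ingested_mg):
    # Fraction of the ingested sugar recovered in the timed urine collection
    return recovered_mg / ingested_mg

def lactulose_mannitol_ratio(lact_recovered_mg, lact_dose_mg,
                             mann_recovered_mg, mann_dose_mg):
    # Ratio of lactulose to mannitol excretion fractions; higher values suggest
    # greater paracellular (small-bowel) permeability.
    return (excretion_fraction(lact_recovered_mg, lact_dose_mg)
            / excretion_fraction(mann_recovered_mg, mann_dose_mg))

# Hypothetical inputs: 5 g lactulose and 1 g mannitol ingested; recoveries are invented.
ratio = lactulose_mannitol_ratio(20, 5000, 180, 1000)
print(f"lactulose/mannitol ratio = {ratio:.3f}")  # interpret against the lab's own reference range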
Is there an association between alterations in the gut microbiome and gut-barrier disruption?
There is an integral relationship between the gut microbiome and gut-barrier function, and dysbiosis can disrupt gut-barrier functionality.
The microbiota produce a variety of metabolites in close proximity to the gut epithelium, impacting gut-barrier function and immune response. For example, short-chain fatty acids produced by Bifidobacterium, Bacteroides, Enterobacter, Faecalibacterium, and Roseburia species impact host immune cell differentiation and metabolism as well as influence susceptibility to pathogens.
Studies have shown that sodium butyrate significantly improves epithelial-barrier function. Other experiments have used transplantation of the intestinal microbiota to show that introduction of certain microbial phenotypes can significantly increase gut permeability.
Practical advice for clinicians and patients
How do you advise patients to avoid gut-barrier disruption?
It is important to educate and counsel patients about the long list of risk factors, many of which are closely related to a Western diet and lifestyle, which can increase their risk for leaky gut.
Once one has it, can it be repaired? Can you share a bit about your protocols in general terms?
Many interventions have been shown to improve intestinal permeability. They include berberine, butyrate, caloric restriction and fasting, curcumin, dietary fiber (prebiotics), moderate exercise, fermented food, fish oil, glutamine, quercetin, probiotics, vagus nerve stimulation, vitamin D, and zinc.
Protocols have to be tailored to patients and their risk factors, diet, and lifestyle.
What are some tips from a nutrition and lifestyle standpoint that patients can follow to ensure a robust gut barrier?
It is important to emphasize a high-fiber diet with naturally fermented food; time-restricted eating, such as an early dinner and nothing else before bedtime; a moderate exercise routine; and gut-brain modulation with techniques such as acupuncture, which can incorporate vagus nerve stimulation. Limited, safe precision supplementation can be discussed on an individual basis based on the patient’s interest, additional testing, and other existing health conditions.
Dr. Akash Goel is a clinical assistant professor of medicine at Weill Cornell in gastroenterology and hepatology. He has disclosed no relevant financial relationships. His work has appeared on networks and publications such as CNN, The New York Times, Time Magazine, and Financial Times. He has a deep interest in nutrition, food as medicine, and the intersection between the gut microbiome and human health.
A version of this article appeared on Medscape.com.
Dr. Akash Goel is a clinical assistant professor of medicine at Weill Cornell in gastroenterology and hepatology. He has disclosed no relevant financial relationships. His work has appeared on networks and publications such as CNN, The New York Times, Time Magazine, and Financial Times. He has a deep interest in nutrition, food as medicine, and the intersection between the gut microbiome and human health.
A version of this article appeared on Medscape.com.
From Ayurveda to the teachings of Hippocrates, medicine’s earliest traditions advanced the belief that the gut is the foundation of all health and disease. Only recently, however, has Western medicine adopted the notion of gut-barrier dysfunction as a pathologic phenomenon critical not only to digestive health but also to chronic allergic, inflammatory, and autoimmune disease.
To learn more, Medscape contributor Akash Goel, MD, interviewed Elena Ivanina, DO, MPH, an integrative gastroenterologist, about the role of the gut barrier. Dr. Ivanina is the founder of the Center for Integrative Gut Health and the former director of Neurogastroenterology and Motility at Lenox Hill Hospital in New York. She runs gutlove.com, an educational platform for all things gut health.
What is the role of the gut barrier in overall health and disease?
The gut is the human body’s largest interface with the external environment. That interface is the gut barrier, which must maintain a careful balance of selectivity: allowing the absorption of healthy nutrients while preventing the penetration of harmful microbes, food antigens, and other proinflammatory factors and toxins.
The gut barrier is made up of the mucus layer, the gut microbiome, epithelial cells, and immune cells in the lamina propria. When this apparatus is disrupted by factors such as infection, a low-fiber diet, antibiotics, or alcohol, it can no longer selectively keep out harmful intraluminal substances.
Gut-barrier disruption leads to translocation of dangerous intraluminal components, such as bacteria and their products, into the gut wall and, most importantly, exposes the immune system to them. This causes improper immune activation and dysregulation, which has been shown to lead to various diseases, including gastrointestinal inflammatory disorders such as inflammatory bowel disease (IBD) and celiac disease, systemic autoimmune diseases such as multiple sclerosis and rheumatoid arthritis, and metabolic diseases such as obesity and diabetes.
Is disruption of this barrier what is usually referred to as “leaky gut”?
Leaky gut is a colloquial term for increased intestinal permeability, or intestinal hyperpermeability. In a 2019 review article, Dr. Michael Camilleri described leaky gut as a term that can be misleading and confusing to the general population, and called on clinicians to be more aware of the potential role of barrier dysfunction in disease and to consider the barrier as a target for treatment.
Is leaky gut more of a mechanism of underlying chronic disease or is it a disease of its own?
Intestinal hyperpermeability is a pathophysiologic process in the gut with certain risk factors that, in some conditions, has been shown to precede chronic disease. There is not yet convincing evidence that it can be diagnosed and treated as its own entity, but research is ongoing.
In IBD, the Crohn’s and Colitis Canada Genetic, Environmental, Microbial Project research consortium has been studying individuals at increased risk for Crohn’s disease because they have a first-degree relative with the disease. The consortium found an increased abundance of Ruminococcus torques in the microbiomes of at-risk individuals who went on to develop Crohn’s disease. R. torques is a mucin degrader that induces an increase in other mucin-utilizing bacteria, which can contribute to gut-barrier compromise.
In other studies, patients were found to have asymptomatic intestinal hyperpermeability years before their diagnosis of Crohn’s disease. This supports further investigation of intestinal hyperpermeability as a diagnosis in its own right that, if addressed, could possibly prevent disease development.
The many possible sources of gut-barrier disruption
What causes leaky gut, and when should physicians and patients suspect that it is present?
Many risk factors have been associated with leaky gut in both human and animal studies, including acrolein (a food toxin), aging, alcohol, antacid drugs, antibiotics, burn injury, chemotherapy, circadian rhythm disruption, corticosteroids, emulsifiers (food additives), strenuous exercise (≥ 2 hours at 60% VO2 max), starvation, fructose, fructans, gliadin (a wheat protein), high-fat diet, high-salt diet, high-sugar diet, hyperglycemia, low-fiber diet, nonsteroidal anti-inflammatory drugs, pesticides, proinflammatory cytokines, psychological stress, radiation, sleep deprivation, smoking, and sweeteners.
Patients with leaky gut may be completely asymptomatic. Physicians should be suspicious if there is a genetic predisposition to chronic disease or if any of these risk factors are identified after assessing a patient’s diet and lifestyle exposures.
What is the role of the Western diet and processed food consumption in driving disruptions of the gut barrier?
The Western diet reduces the thickness of the gut-barrier mucus layer, leading to increased gut permeability. People who consume a Western diet typically eat less than 15 grams of fiber per day, far less than in many other cultures, including the Hadza hunter-gatherers of Tanzania, who consume 100 or more grams of fiber a day.
With a fiber-depleted diet, gut microbiota that normally feed on fiber gradually disappear and other commensals shift their metabolism to degrade the gut-barrier mucus layer.
A low-fiber diet also decreases short-chain fatty acid production, which reduces production of mucus and affects tight junction regulation.
Emerging evidence on causality
New evidence demonstrates that conditions long considered functional gastrointestinal disorders, such as functional dyspepsia, are associated with abnormalities of the intestinal barrier. What is the association of conditions such as functional dyspepsia and irritable bowel syndrome (IBS) with gut-barrier disruption?
Conditions such as functional dyspepsia and IBS are similar in that their pathophysiology is incompletely understood and likely reflects contributions from many different underlying mechanisms. This makes the conditions difficult for clinicians to explain to patients and, without specific therapeutic targets, often difficult to treat.
Emerging evidence with new diagnostic tools, such as confocal laser endomicroscopy, has demonstrated altered mucosal barrier function in both conditions.
In patients with IBS and suspected food intolerance, studies examining exposure to the suspected food antigens found that the food caused immediate epithelial breaks, increased intervillous spaces, and increased inflammatory cells in the gut mucosa. These changes were associated with patient responses to exclusion diets.
In functional dyspepsia, another confocal laser endomicroscopy study showed that affected patients have significantly greater epithelial gap density in the duodenum than healthy controls, along with impaired duodenal epithelial-barrier integrity and evidence of increased cellular pyroptosis in the duodenal mucosa.
These findings suggest that, while IBS and functional dyspepsia are still likely multifactorial, there may be a common preclinical state that can be investigated further, both for preventing its development and for use as a therapeutic target.
What diagnostic tests are you using to determine whether patients have disruptions of the gut barrier? Are they validated or still experimental?
Various testing strategies have been used in research to diagnose intestinal hyperpermeability. In a 2021 analysis, Dr. Michael Camilleri found that the optimal probes for measuring small intestinal and colonic permeability are the urinary mass of 13C-mannitol excreted at 0-2 hours and of lactulose excreted at 2-8 hours or sucralose excreted at 8-24 hours. Studies of postinfectious IBS have incorporated elevated urinary lactulose/mannitol ratios. Dr. Alessio Fasano and others have investigated zonulin as a biomarker of impaired gut-barrier function. These tests are still considered experimental.
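As a rough illustration of how such sugar-probe tests are interpreted, the sketch below computes the fraction of each ingested probe recovered in a timed urine collection and then the lactulose/mannitol ratio. This is a minimal conceptual example, not a clinical tool: the doses, recovered amounts, and variable names are hypothetical placeholders, and no diagnostic cutoff is implied.

```python
# Illustrative sketch only: fractional urinary recovery of sugar probes and
# the lactulose/mannitol ratio. All doses, collection windows, and recovered
# amounts below are hypothetical placeholders, not clinical reference data.

def fractional_recovery(urine_mg: float, ingested_mg: float) -> float:
    """Fraction of the ingested probe recovered in the timed urine collection."""
    return urine_mg / ingested_mg

# Hypothetical ingested doses (mg)
doses = {"mannitol": 1000.0, "lactulose": 5000.0, "sucralose": 1000.0}

# Hypothetical amounts recovered in urine (mg) for each collection window
recovered = {
    "mannitol_0_2h": 250.0,   # 13C-mannitol, 0-2 h window
    "lactulose_2_8h": 20.0,   # lactulose, 2-8 h window
    "sucralose_8_24h": 15.0,  # sucralose, 8-24 h window
}

fr_mannitol = fractional_recovery(recovered["mannitol_0_2h"], doses["mannitol"])
fr_lactulose = fractional_recovery(recovered["lactulose_2_8h"], doses["lactulose"])
fr_sucralose = fractional_recovery(recovered["sucralose_8_24h"], doses["sucralose"])

# The ratio compares a poorly absorbed disaccharide (lactulose) with a readily
# absorbed monosaccharide (mannitol); a higher ratio suggests relatively
# greater paracellular permeability.
lm_ratio = fr_lactulose / fr_mannitol

print(f"Fractional recovery, mannitol (0-2 h):   {fr_mannitol:.3f}")
print(f"Fractional recovery, lactulose (2-8 h):  {fr_lactulose:.4f}")
print(f"Fractional recovery, sucralose (8-24 h): {fr_sucralose:.4f}")
print(f"Lactulose/mannitol ratio: {lm_ratio:.3f}")
```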
Is there an association between alterations in the gut microbiome and gut-barrier disruption?
There is an integral relationship between the gut microbiome and the gut barrier, and dysbiosis can disrupt gut-barrier function.
The microbiota produce a variety of metabolites in close proximity to the gut epithelium, affecting gut-barrier function and the immune response. For example, short-chain fatty acids produced by Bifidobacterium, Bacteroides, Enterobacter, Faecalibacterium, and Roseburia species influence host immune-cell differentiation and metabolism as well as susceptibility to pathogens.
Studies have shown that sodium butyrate significantly improves epithelial-barrier function. Other experiments have used transplantation of the intestinal microbiota to show that introduction of certain microbial phenotypes can significantly increase gut permeability.
Practical advice for clinicians and patients
How do you advise patients to avoid gut-barrier disruption?
It is important to educate and counsel patients about the long list of risk factors that can increase their risk for leaky gut, many of which are closely tied to a Western diet and lifestyle.
Once one has it, can it be repaired? Can you share a bit about your protocols in general terms?
Many interventions have been shown to reduce intestinal hyperpermeability. They include berberine, butyrate, caloric restriction and fasting, curcumin, dietary fiber (prebiotics), moderate exercise, fermented food, fish oil, glutamine, quercetin, probiotics, vagus nerve stimulation, vitamin D, and zinc.
Protocols have to be tailored to patients and their risk factors, diet, and lifestyle.
What are some tips from a nutrition and lifestyle standpoint that patients can follow to ensure a robust gut barrier?
It is important to emphasize a high-fiber diet with naturally fermented foods; time-restricted eating, such as an early dinner with nothing else eaten before bedtime; a moderate exercise routine; and gut-brain modulation with techniques such as acupuncture, which can incorporate vagus nerve stimulation. Limited, safe, precision supplementation can be discussed on an individual basis according to the patient’s interest, additional testing, and other existing health conditions.
Dr. Akash Goel is a clinical assistant professor of medicine in gastroenterology and hepatology at Weill Cornell. He has disclosed no relevant financial relationships. His work has appeared on networks and in publications such as CNN, The New York Times, Time magazine, and the Financial Times. He has a deep interest in nutrition, food as medicine, and the intersection of the gut microbiome and human health.
A version of this article appeared on Medscape.com.