Can Modulation of the Microbiome Improve Cancer Immunotherapy Tolerance and Efficacy?


WASHINGTON — For years, oncologist Jonathan Peled, MD, PhD, and his colleagues at Memorial Sloan Kettering Cancer Center (MSKCC) in New York City have been documenting gut microbiota disruption during allogeneic hematopoietic stem cell transplantation (allo-HSCT) and its role in frequent and potentially fatal bloodstream infections (BSIs) in the first 100 days after transplant.


Modulating microbiome composition to improve outcomes after allo-HSCT for hematological malignancies is a prime goal, and at the Gut Microbiota for Health (GMFH) World Summit 2025, Peled shared two new findings.

In one study, his team found that sucrose can exacerbate antibiotic-induced microbiome injury in patients undergoing allo-HSCT — a finding that “raises the question of whether our dietary recommendations [for] allo-HSCT patients are correct,” said Peled, assistant attending at MSKCC, during a session on the gut microbiome and oncology.

And in another study, they found that a rationally designed probiotic formulation may help lower the incidence of bacterial BSIs. In December 2024, the probiotic formulation (SER-155, Seres Therapeutics, Inc.) was granted breakthrough therapy designation by the FDA.

With immunotherapies more broadly, researchers are increasingly looking at diet and modulation of the microbiome to improve both treatment tolerance and efficacy, experts said at the meeting convened by the AGA and the European Society of Neurogastroenterology and Motility.

“Cancer patients and caregivers are asking, ‘What should I eat?’” said Carrie Daniel-MacDougall, PhD, MPH, a nutritional epidemiologist at the University of Texas MD Anderson Cancer Center in Houston. “They’re not just focused on side effects — they want a good outcome for their treatment, and they’re exploring a lot of dietary strategies [for which there] is not a lot of evidence.”

Clinicians are challenged by the fact that “we don’t typically collect dietary data in clinical trials of cancer drugs,” leaving them to extrapolate from evidence-based diet guidelines for cancer prevention, Daniel-MacDougall said.

But “I think that’s starting to shift,” she said, with the microbiome being increasingly recognized for its potential influences on therapeutic response and clinical trials underway looking at “a healthy dietary pattern not just for prevention but survival.”

 

Diet and Probiotics After allo-HSCT

The patterns of microbiota disruption during allo-HSCT — a procedure that includes antibiotic administration, chemotherapy, and sometimes irradiation — are characterized by loss of diversity and the expansion of potentially pathogenic organisms, most commonly Enterococcus, said Peled.

This has been demonstrated across transplantation centers. In a multicenter, international study published in 2020, the patterns of microbiota disruption and their impact on mortality were similar across MSK and other transplantation centers, with higher diversity of intestinal microbiota associated with lower mortality.

Other studies have shown that Enterococcus domination alone (defined arbitrarily as > 30% of fecal microbial composition) is associated with graft-versus-host disease and higher mortality after allo-HSCT and that intestinal domination by Proteobacteria coincides temporally with BSIs, he said.

Autologous fecal microbiota transplantation (FMT) has been shown to largely restore the microbiota composition the patient had before antibiotic treatment and allo-HSCT, he said, making fecal sample banking and posttreatment FMT a potential approach for reconstituting the gut microbiome and improving outcomes.

But “lately we’ve been very interested in diet for modulating [harmful] patterns” in the microbiome composition, Peled said.

In the new study suggesting a role for sugar avoidance, published last year as a bioRxiv preprint, Peled and his colleagues collected real-time dietary intake data (40,702 food entries) from 173 patients hospitalized for several weeks for allo-HSCT at MSK and analyzed it alongside longitudinally collected fecal samples. They used a Bayesian mixed-effects model to identify dietary components that may correlate with microbial disruption.

“What jumped out as very predictive of a low diversity fecal sample [and expansion of Enterococcus] in the 2 days prior to collection was the interaction between antibiotics and the consumption of sweets” — foods rich in simple sugars, Peled said. The relationship between sugar and the microbiome occurred only during periods of antibiotic exposure.

“And it was particularly perplexing because the foods that fall into the ‘sweets’ category are foods we encourage people to eat clinically when they’re not feeling well and food intake drops dramatically,” he said. This includes foods like nutritional drinks or shakes, Italian ice, gelatin dessert, and sports drinks.

(In a mouse model of post-antibiotic Enterococcus expansion, Peled and his co-investigators then validated the findings and ruled out the impact of any reductions in fiber.)

In addition to possibly revising dietary recommendations for patients undergoing allo-HSCT, the findings raise the question of whether avoiding sugar intake while on antibiotics, in general, is a way to mitigate antibiotic-induced dysbiosis, he said.

To test the role of probiotics, Peled and colleagues collaborated with Seres Therapeutics on a phase 1b trial of an oral combination (SER-155) of 16 fermented strains “selected rationally,” he said, for their ability to decolonize gut pathogens, improve gut barrier function (in vitro), and reduce gut inflammation and local immune activation.

After a safety lead-in, patients were randomized to receive SER-155 (n = 20) or placebo (n = 14) three times — prior to transplant, upon neutrophil engraftment (with vancomycin “conditioning”), and after transplant. “The strains succeeded in grafting in the GI [gastrointestinal] tract…and some of them persisted all the way through to day 100,” Peled said.

The incidence of pathogen domination was substantially lower in the probiotic recipients compared to an MSK historical control cohort, and the incidence of BSIs was significantly lower compared to the placebo arm (10% vs 43%, respectively, representing a 77% relative risk reduction), he said.

 

Diet and Immunotherapy Response: Trials at MD Anderson

One of the first trials Daniel-MacDougall launched at MD Anderson on diet and the microbiome randomized 55 patients who were obese and had a history of colorectal cancer or precancerous polyps to add a cup of beans to their usual diet or to continue their usual diet without beans. The 16-week BE GONE trial included a crossover at 8 weeks; stool and fasting blood were collected every 4 weeks.

“Beans are a prebiotic super-house in my opinion, and they’re also something this population would avoid,” said Daniel-MacDougall, associate professor in the department of epidemiology at MD Anderson and faculty director of the Bionutrition Research Core and Research Kitchen.

“We saw a modest increase in alpha diversity [in the intervention group] and similar trends with microbiota-derived metabolites” that regressed when patients returned to their usual diet, she said. The researchers also documented decreases in proteomic biomarkers of intestinal and systemic immune and inflammatory response.

The impact of diet on cancer survival was shown in subsequent research, including an observational study published in Science in 2021 of patients with melanoma receiving immune checkpoint blockade (ICB) treatment. “Patients who consumed insufficient dietary fiber at the start of therapy tended to do worse [than those reporting sufficient fiber intake],” with significantly lower progression-free survival, Daniel-MacDougall said.

“And interestingly, when we looked at dietary fiber [with and without] probiotic use, patients who had sufficient fiber but did not take probiotics did the best,” she said. [The probiotics were not endorsed or selected by their physicians.]

Now, the researchers at MD Anderson are moving into “precision nutrition” research, Daniel-MacDougall said, with a phase 2 randomized, double-blind trial of high dietary fiber intake (a target of 50 g/d from whole foods) vs a healthy control diet (20 g/d of fiber) in patients with melanoma receiving ICB.

The study, which is underway, is a fully controlled feeding study, with all meals and snacks provided by MD Anderson and macronutrients controlled. Researchers are collecting blood, stool, and tumor tissue (if available) to answer questions about the microbiome, changes in systemic and tissue immunity, disease response and immunotherapy toxicity, and other issues.

Peled disclosed IP licensing and research support from Seres Therapeutics; consulting with Da Volterra, MaaT Pharma, and CSL Behring; and advisory/equity with Postbiotics + Research LLC and Prodigy Biosciences. Daniel-MacDougall reported having no disclosures.

A version of this article appeared on Medscape.com.


Novel Gene Risk Score Predicts Outcomes After RYGB Surgery


SAN DIEGO — A novel gene risk score, informed by machine learning, predicted weight-loss outcomes after Roux-en-Y gastric bypass (RYGB) surgery, a new analysis showed.

The findings suggested that the MyPhenome test (Phenomix Sciences) can help clinicians identify patients most likely to benefit from bariatric procedures and those at greater risk for long-term weight regain after surgery.

“Patients with both a high genetic risk score and rare mutations in the leptin-melanocortin pathway (LMP) had significantly worse outcomes, maintaining only 4.9% total body weight loss [TBWL] over 15 years compared to up to 24.8% in other genetic groups,” Phenomix Sciences Co-founder Andres Acosta, MD, PhD, told GI & Hepatology News.




The study, which included details on the score’s development and predictive capability, was presented at Digestive Disease Week® (DDW) 2025.

‘More Precise Bariatric Care’

The researchers recently developed a machine learning-assisted gene risk score for calories to satiation (CTSGRS), which mainly involves genes in the LMP. To assess the effect of the score, with or without LMP gene variants, on weight loss and weight recurrence after RYGB, they identified 707 patients with a history of bariatric procedures from the Mayo Clinic Biobank. Patients who had undergone duodenal switch or revisional procedures, or who used antiobesity medications or became pregnant during follow-up, were excluded.

To make predictions for 442 of the patients, the team first collected anthropometric data up to 15 years after RYGB. They then used a two-step approach: first assessing for monogenic variants in the LMP and defining participants as carriers (LMP+) or noncarriers (LMP-), and then defining the gene risk score status (CTSGRS+ or CTSGRS-).

The result was four groups: LMP+/CTSGRS+, LMP+/CTSGRS-, LMP-/CTSGRS+, and LMP-/CTSGRS-. Multiple regression analysis was used to analyze TBWL percentage (TBWL%) between the groups at different timepoints, adjusting for baseline weight, age, and gender.

At the 10-year follow-up, the LMP+/CTSGRS+ group demonstrated significantly greater weight recurrence (regain), measured as a percentage of total body weight, than the other groups.

At 15 years post-RYGB, the mean TBWL% for LMP+/CTSGRS+ was -4.9 vs -20.3 for LMP+/CTSGRS-, -18.0 for LMP-/CTSGRS+, and -24.8 for LMP-/CTSGRS-.

Further analyses showed that the LMP+/CTSGRS+ group had significantly less weight loss than LMP+/CTSGRS- and LMP-/CTSGRS- groups.

Based on the findings, the authors wrote, “Genotyping patients could improve the implementation of individualized weight-loss interventions, enhance weight-loss outcomes, and/or may explain one of the etiological factors associated with weight recurrence after RYGB.”

Acosta noted, “We’re actively expanding our research to include more diverse populations by age, sex, and race. This includes ongoing analysis to understand whether certain demographic or physiological characteristics affect how the test performs, particularly in the context of bariatric surgery.”

The team also is investigating the benefits of phenotyping for obesity comorbidities such as heart disease and diabetes, he said, and exploring whether early interventions in high-risk patients can prevent long-term weight regain and improve outcomes.

In addition, Acosta said, the team recently launched “the first prospective, placebo-controlled clinical trial using the MyPhenome test to predict response to semaglutide.” That study is based on earlier findings showing that patients identified with a Hungry Gut phenotype lost nearly twice as much weight on semaglutide compared with those who tested negative.

Overall, he concluded, “These findings open the door to more precise bariatric care. When we understand a patient’s biological drivers of obesity, we can make better decisions about the right procedure, follow-up, and long-term support. This moves us away from a one-size-fits-all model to care rooted in each patient’s unique biology.”

 

Potentially Paradigm-Shifting

Onur Kutlu, MD, associate professor of surgery and director of the Metabolic Surgery and Metabolic Health Program at the Miller School of Medicine, University of Miami, in Miami, Florida, commented on the study for GI & Hepatology News. “By integrating polygenic risk scores into predictive models, the authors offer an innovative method for identifying patients at elevated risk for weight regain following RYGB.”

“Their findings support the hypothesis that genetic predisposition — particularly involving energy homeostasis pathways — may underlie differential postoperative trajectories,” he said. “This approach has the potential to shift the paradigm from reactive to proactive management of weight recurrence.”

Because current options for treat weight regain are “suboptimal,” he said, “prevention becomes paramount. Preoperative identification of high-risk individuals could inform surgical decision-making, enable earlier interventions, and facilitate personalized postoperative monitoring and support.”

“If validated in larger, prospective cohorts, genetic risk stratification could enhance the precision of bariatric care and improve long-term outcomes,” he added. “Future studies should aim to validate these genetic models across diverse populations and explore how integration of behavioral, psychological, and genetic data may further refine patient selection and care pathways.”

The study was funded by Mayo Clinic and Phenomix Sciences. Gila Therapeutics and Phenomix Sciences licensed Acosta’s research technologies from the University of Florida and Mayo Clinic. Acosta declared receiving consultant fees in the past 5 years from Rhythm Pharmaceuticals, Gila Therapeutics, Amgen, General Mills, BI, Currax, Nestle, Phenomix Sciences, Bausch Health, and RareDiseases, as well as funding support from the National Institutes of Health, Vivus Pharmaceuticals, Novo Nordisk, Apollo Endosurgery, Satiogen Pharmaceuticals, Spatz Medical, and Rhythm Pharmaceuticals. Kutlu declared having no conflicts of interest.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

SAN DIEGO –A novel gene risk score, informed by machine learning, predicted weight-loss outcomes after Roux-en-Y gastric bypass (RYGB) surgery, a new analysis showed.

The findings suggested that the MyPhenome test (Phenomix Sciences) can help clinicians identify the patients most likely to benefit from bariatric procedures and at a greater risk for long-term weight regain after surgery.

“Patients with both a high genetic risk score and rare mutations in the leptin-melanocortin pathway (LMP) had significantly worse outcomes, maintaining only 4.9% total body weight loss [TBWL] over 15 years compared to up to 24.8% in other genetic groups,” Phenomix Sciences Co-founder Andres Acosta, MD, PhD, told GI & Hepatology News.

Dr. Andres Acosta



The study included details on the score’s development and predictive capability. It was presented at Digestive Disease Week® (DDW) 2025

‘More Precise Bariatric Care’

The researchers recently developed a machine learning-assisted gene risk score for calories to satiation (CTSGRS), which mainly involves genes in the LMP. To assess the role of the score with or without LMP gene variants on weight loss and weight recurrence after RYGB, they identified 707 patients with a history of bariatric procedures from the Mayo Clinic Biobank. Patients with duodenal switch, revisional procedures, or who used antiobesity medications or became pregnant during follow-up were excluded.

To make predictions for 442 of the patients, the team first collected anthropometric data up to 15 years after RYGB. Then they used a two-step approach: Assessing for monogenic variants in the LMP and defining participants as carriers (LMP+) or noncarriers (LMP-). Then they defined the gene risk score (CTSGRS+ or CTSGRS-).

The result was four groups: LMP+/CTSGRS+, LMP+/CTSGRS-, LMP-/CTSGRS+, and LMP-/CTSGRS-. Multiple regression analysis was used to analyze TBWL percentage (TBWL%) between the groups at different timepoints, adjusting for baseline weight, age, and gender.

At the 10-year follow-up, the LMP+/CTSGRS+ group demonstrated a significantly higher weight recurrence (regain) of TBW% compared to the other groups.

At 15 years post-RYGB, the mean TBWL% for LMP+/CTSGRS+ was -4.9 vs -20.3 for LMP+/CTSGRS-, -18.0 for LMP-/CTSGRS+, and -24.8 for LMP-/CTSGRS-.

Further analyses showed that the LMP+/CTSGRS+ group had significantly less weight loss than LMP+/CTSGRS- and LMP-/CTSGRS- groups.

Based on the findings, the authors wrote, “Genotyping patients could improve the implementation of individualized weight-loss interventions, enhance weight-loss outcomes, and/or may explain one of the etiological factors associated with weight recurrence after RYGB.”

Acosta noted, “We’re actively expanding our research to include more diverse populations by age, sex, and race. This includes ongoing analysis to understand whether certain demographic or physiological characteristics affect how the test performs, particularly in the context of bariatric surgery.”

The team also is investigating the benefits of phenotyping for obesity comorbidities such as heart disease and diabetes, he said, and exploring whether early interventions in high-risk patients can prevent long-term weight regain and improve outcomes.

In addition, Acosta said, the team recently launched “the first prospective, placebo-controlled clinical trial using the MyPhenome test to predict response to semaglutide.” That study is based on earlier findings showing that patients identified with a Hungry Gut phenotype lost nearly twice as much weight on semaglutide compared with those who tested negative.

Overall, he concluded, “These findings open the door to more precise bariatric care. When we understand a patient’s biological drivers of obesity, we can make better decisions about the right procedure, follow-up, and long-term support. This moves us away from a one-size-fits-all model to care rooted in each patient’s unique biology.”

 

Potentially Paradigm-Shifting

Onur Kutlu, MD, associate professor of surgery and director of the Metabolic Surgery and Metabolic Health Program at the Miller School of Medicine, University of Miami, in Miami, Florida, commented on the study for GI & Hepatology News. “By integrating polygenic risk scores into predictive models, the authors offer an innovative method for identifying patients at elevated risk for weight regain following RYGB.”

“Their findings support the hypothesis that genetic predisposition — particularly involving energy homeostasis pathways — may underlie differential postoperative trajectories,” he said. “This approach has the potential to shift the paradigm from reactive to proactive management of weight recurrence.”

Because current options for treat weight regain are “suboptimal,” he said, “prevention becomes paramount. Preoperative identification of high-risk individuals could inform surgical decision-making, enable earlier interventions, and facilitate personalized postoperative monitoring and support.”

“If validated in larger, prospective cohorts, genetic risk stratification could enhance the precision of bariatric care and improve long-term outcomes,” he added. “Future studies should aim to validate these genetic models across diverse populations and explore how integration of behavioral, psychological, and genetic data may further refine patient selection and care pathways.”

The study was funded by Mayo Clinic and Phenomix Sciences. Gila Therapeutics and Phenomix Sciences licensed Acosta’s research technologies from the University of Florida and Mayo Clinic. Acosta declared receiving consultant fees in the past 5 years from Rhythm Pharmaceuticals, Gila Therapeutics, Amgen, General Mills, BI, Currax, Nestle, Phenomix Sciences, Bausch Health, and RareDiseases, as well as funding support from the National Institutes of Health, Vivus Pharmaceuticals, Novo Nordisk, Apollo Endosurgery, Satiogen Pharmaceuticals, Spatz Medical, and Rhythm Pharmaceuticals. Kutlu declared having no conflicts of interest.

A version of this article appeared on Medscape.com.

SAN DIEGO –A novel gene risk score, informed by machine learning, predicted weight-loss outcomes after Roux-en-Y gastric bypass (RYGB) surgery, a new analysis showed.

The findings suggested that the MyPhenome test (Phenomix Sciences) can help clinicians identify the patients most likely to benefit from bariatric procedures and at a greater risk for long-term weight regain after surgery.

“Patients with both a high genetic risk score and rare mutations in the leptin-melanocortin pathway (LMP) had significantly worse outcomes, maintaining only 4.9% total body weight loss [TBWL] over 15 years compared to up to 24.8% in other genetic groups,” Phenomix Sciences Co-founder Andres Acosta, MD, PhD, told GI & Hepatology News.

Dr. Andres Acosta



The study included details on the score’s development and predictive capability. It was presented at Digestive Disease Week® (DDW) 2025

‘More Precise Bariatric Care’

The researchers recently developed a machine learning-assisted gene risk score for calories to satiation (CTSGRS), which mainly involves genes in the LMP. To assess the role of the score with or without LMP gene variants on weight loss and weight recurrence after RYGB, they identified 707 patients with a history of bariatric procedures from the Mayo Clinic Biobank. Patients with duodenal switch, revisional procedures, or who used antiobesity medications or became pregnant during follow-up were excluded.

To make predictions for 442 of the patients, the team first collected anthropometric data up to 15 years after RYGB. Then they used a two-step approach: Assessing for monogenic variants in the LMP and defining participants as carriers (LMP+) or noncarriers (LMP-). Then they defined the gene risk score (CTSGRS+ or CTSGRS-).

The result was four groups: LMP+/CTSGRS+, LMP+/CTSGRS-, LMP-/CTSGRS+, and LMP-/CTSGRS-. Multiple regression analysis was used to analyze TBWL percentage (TBWL%) between the groups at different timepoints, adjusting for baseline weight, age, and gender.

At the 10-year follow-up, the LMP+/CTSGRS+ group demonstrated a significantly higher weight recurrence (regain) of TBW% compared to the other groups.

At 15 years post-RYGB, the mean TBWL% for LMP+/CTSGRS+ was -4.9 vs -20.3 for LMP+/CTSGRS-, -18.0 for LMP-/CTSGRS+, and -24.8 for LMP-/CTSGRS-.

Further analyses showed that the LMP+/CTSGRS+ group had significantly less weight loss than LMP+/CTSGRS- and LMP-/CTSGRS- groups.

Based on the findings, the authors wrote, “Genotyping patients could improve the implementation of individualized weight-loss interventions, enhance weight-loss outcomes, and/or may explain one of the etiological factors associated with weight recurrence after RYGB.”

Acosta noted, “We’re actively expanding our research to include more diverse populations by age, sex, and race. This includes ongoing analysis to understand whether certain demographic or physiological characteristics affect how the test performs, particularly in the context of bariatric surgery.”

The team also is investigating the benefits of phenotyping for obesity comorbidities such as heart disease and diabetes, he said, and exploring whether early interventions in high-risk patients can prevent long-term weight regain and improve outcomes.

In addition, Acosta said, the team recently launched “the first prospective, placebo-controlled clinical trial using the MyPhenome test to predict response to semaglutide.” That study is based on earlier findings showing that patients identified with a Hungry Gut phenotype lost nearly twice as much weight on semaglutide compared with those who tested negative.

Overall, he concluded, “These findings open the door to more precise bariatric care. When we understand a patient’s biological drivers of obesity, we can make better decisions about the right procedure, follow-up, and long-term support. This moves us away from a one-size-fits-all model to care rooted in each patient’s unique biology.”

 

Potentially Paradigm-Shifting

Onur Kutlu, MD, associate professor of surgery and director of the Metabolic Surgery and Metabolic Health Program at the Miller School of Medicine, University of Miami, in Miami, Florida, commented on the study for GI & Hepatology News. “By integrating polygenic risk scores into predictive models, the authors offer an innovative method for identifying patients at elevated risk for weight regain following RYGB.”

“Their findings support the hypothesis that genetic predisposition — particularly involving energy homeostasis pathways — may underlie differential postoperative trajectories,” he said. “This approach has the potential to shift the paradigm from reactive to proactive management of weight recurrence.”

Because current options for treat weight regain are “suboptimal,” he said, “prevention becomes paramount. Preoperative identification of high-risk individuals could inform surgical decision-making, enable earlier interventions, and facilitate personalized postoperative monitoring and support.”

“If validated in larger, prospective cohorts, genetic risk stratification could enhance the precision of bariatric care and improve long-term outcomes,” he added. “Future studies should aim to validate these genetic models across diverse populations and explore how integration of behavioral, psychological, and genetic data may further refine patient selection and care pathways.”

The study was funded by Mayo Clinic and Phenomix Sciences. Gila Therapeutics and Phenomix Sciences licensed Acosta’s research technologies from the University of Florida and Mayo Clinic. Acosta declared receiving consultant fees in the past 5 years from Rhythm Pharmaceuticals, Gila Therapeutics, Amgen, General Mills, BI, Currax, Nestle, Phenomix Sciences, Bausch Health, and RareDiseases, as well as funding support from the National Institutes of Health, Vivus Pharmaceuticals, Novo Nordisk, Apollo Endosurgery, Satiogen Pharmaceuticals, Spatz Medical, and Rhythm Pharmaceuticals. Kutlu declared having no conflicts of interest.

A version of this article appeared on Medscape.com.

Publications
Publications
Topics
Article Type
Sections
Article Source

FROM DDW 2025

Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Gate On Date
Un-Gate On Date
Use ProPublica
CFC Schedule Remove Status
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article
survey writer start date

Add-On Niraparib May Slow Hormone-Sensitive Metastatic Prostate Cancer

Article Type
Changed

Adding the poly (ADP-ribose) polymerase (PARP) inhibitor niraparib to abiraterone acetate plus prednisone delayed disease progression and postponed the onset of symptoms in patients with metastatic castration-sensitive prostate cancer with homologous recombination repair (HRR) genetic alterations, according to findings from the AMPLITUDE trial.

An interim analysis also demonstrated an early trend toward improved overall survival in patients who received niraparib.

These findings support adding niraparib to abiraterone acetate plus prednisone “as a new treatment option” in patients with HRR alterations, said Study Chief Gerhardt Attard, MD, PhD, chair of medical oncology, University College London Cancer Institute, London, England, speaking at the American Society of Clinical Oncology (ASCO) 2025 annual meeting.

The findings also highlight that “it’s going to be incredibly important that patients who get diagnosed with hormone-sensitive prostate cancer are tested to see if they have these mutations, so they can be offered the right therapy at the right time,” Outside Expert Bradley McGregor, MD, with Dana-Farber Cancer Institute in Boston, said during a press briefing.

Ultimately, “you don’t know if you don’t test,” McGregor added.

About one quarter of patients with metastatic castration-sensitive prostate cancer have alterations in HRR genes, about half of which are BRCA mutations. These patients typically experience faster disease progression and worse outcomes. An androgen receptor pathway inhibitor, such as abiraterone, alongside androgen deprivation therapy with or without docetaxel, is standard therapy for these patients, but “there is still a need for treatments that are tailored to patients whose tumors harbor HRR alterations,” Attard said in a press release.

Adding niraparib to this standard regimen could help improve survival in these patients.

In 2023, the FDA approved niraparib and abiraterone acetate to treat BRCA-mutated metastatic castration-resistant prostate cancer, after findings from the MAGNITUDE study demonstrated improved progression-free survival (PFS).

The phase 3 AMPLITUDE trial set out to evaluate whether this combination would yield similar survival benefits in metastatic castration-sensitive prostate cancer with HRR mutations.

In the study, 696 patients (median age, 68 years) with metastatic castration-sensitive prostate cancer and one or more HRR gene alterations were randomly allocated (1:1) to niraparib with abiraterone acetate plus prednisone or placebo with abiraterone acetate plus prednisone.

Exclusion criteria included any prior PARP inhibitor therapy or androgen receptor pathway inhibitor other than abiraterone. Eligible patients could have received at most 6 months of androgen deprivation therapy, ≤ 6 cycles of docetaxel, ≤ 45 days of abiraterone acetate plus prednisone and palliative radiation.

Baseline characteristics were well balanced between the groups. Just over half the patients in each group had BRCA1 or BRCA2 alterations. The majority had an electrocorticogram performance status of 0, but high-risk features with a predominance for synchronous metastatic disease and metastatic high volume. About 16% had received prior docetaxel, in keeping with real world data, Attard noted.

At a median follow-up of 30.8 months, niraparib plus standard therapy led to a significant 37% reduction in the risk for radiographic progression or death. The median radiographic PFS (rPFS) was not reached in the niraparib group vs 29.5 months in the placebo group (hazard ratio [HR], 0.63; P = .0001).

Patients with BRCA alterations, in particular, showed the greatest benefit, with niraparib reducing the risk for radiographic progression or death by 48% compared to placebo (median rPFS not reached vs 26 months; HR, 0.52; P < .0001).

On the key secondary endpoint of time to symptomatic progression, adding niraparib led to a “statistically and clinically” significant benefit — a 50% lower in the risk for symptomatic progression in the full population (HR, 0.50), and a 56% lower risk in BRCA-mutant group (HR, 0.44).

The first interim analysis also showed an early trend toward improved overall survival favoring the niraparib combination, with a reduction in the risk for death of 21% in the HRR-mutant population (HR, 0.79; P = .10) and 25% (HR, 0.75; P = .15) in the BRCA-mutant population.

Grade 3/4 adverse events were more common with the niraparib combination group compared to the placebo group (75% vs 59%), with anemia and hypertension being the most common. However, treatment discontinuations due to adverse remained low (15% with niraparib vs 10% with placebo).

Attard noted, however, that half the target number of patients required for the final analysis died. Still, “in my view, there’s a clear trend for favoring survival in the patients randomized to niraparib,” he told attendees.

 

‘Exciting News’ for Patients 

The AMPLITUDE results are “really exciting news for our patients,” McGregor said.

Considering the poor prognosis of patients with metastatic castration-sensitive prostate cancer, “it is reasonable to prioritize early access to PARP inhibitors for these men, at least for the ones with BRCA mutations,” added ASCO discussant Joaquin Mateo, MD, PhD, with Vall d’Hebron Institute of Oncology, Barcelona, Spain.

However, Mateo explained, “I think that for patients with mutations in the other genes, I will be more prudent, and I’ll be on the lookout for the overall survival data to mature.”

The other key conclusion, Mateo said, is that genomic profiling “should be moved earlier into the patient course, and I am confident that embedding genomic profiling into the diagnostic evaluations of metastatic prostate cancer is also going to result in better quality of testing, more efficacious testing, and also a more equitable framework of access to testing for patients.”



This study was funded by Janssen Research & Development, LLC. Attard and Mateo disclosed relationships with Janssen and other pharmaceutical companies. McGregor disclosed relationships with Arcus Biosciences, Astellas, AVEO, Bristol Myers Squibb, Daiichi Sankyo, AstraZeneca, and other companies.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Adding the poly (ADP-ribose) polymerase (PARP) inhibitor niraparib to abiraterone acetate plus prednisone delayed disease progression and postponed the onset of symptoms in patients with metastatic castration-sensitive prostate cancer with homologous recombination repair (HRR) genetic alterations, according to findings from the AMPLITUDE trial.

An interim analysis also demonstrated an early trend toward improved overall survival in patients who received niraparib.

These findings support adding niraparib to abiraterone acetate plus prednisone “as a new treatment option” in patients with HRR alterations, said Study Chief Gerhardt Attard, MD, PhD, chair of medical oncology, University College London Cancer Institute, London, England, speaking at the American Society of Clinical Oncology (ASCO) 2025 annual meeting.

The findings also highlight that “it’s going to be incredibly important that patients who get diagnosed with hormone-sensitive prostate cancer are tested to see if they have these mutations, so they can be offered the right therapy at the right time,” Outside Expert Bradley McGregor, MD, with Dana-Farber Cancer Institute in Boston, said during a press briefing.

Ultimately, “you don’t know if you don’t test,” McGregor added.

About one quarter of patients with metastatic castration-sensitive prostate cancer have alterations in HRR genes, about half of which are BRCA mutations. These patients typically experience faster disease progression and worse outcomes. An androgen receptor pathway inhibitor, such as abiraterone, alongside androgen deprivation therapy with or without docetaxel, is standard therapy for these patients, but “there is still a need for treatments that are tailored to patients whose tumors harbor HRR alterations,” Attard said in a press release.

Adding niraparib to this standard regimen could help improve survival in these patients.

In 2023, the FDA approved niraparib and abiraterone acetate to treat BRCA-mutated metastatic castration-resistant prostate cancer, after findings from the MAGNITUDE study demonstrated improved progression-free survival (PFS).

The phase 3 AMPLITUDE trial set out to evaluate whether this combination would yield similar survival benefits in metastatic castration-sensitive prostate cancer with HRR mutations.

In the study, 696 patients (median age, 68 years) with metastatic castration-sensitive prostate cancer and one or more HRR gene alterations were randomly allocated (1:1) to niraparib with abiraterone acetate plus prednisone or placebo with abiraterone acetate plus prednisone.

Exclusion criteria included any prior PARP inhibitor therapy or androgen receptor pathway inhibitor other than abiraterone. Eligible patients could have received at most 6 months of androgen deprivation therapy, ≤ 6 cycles of docetaxel, ≤ 45 days of abiraterone acetate plus prednisone and palliative radiation.

Baseline characteristics were well balanced between the groups. Just over half the patients in each group had BRCA1 or BRCA2 alterations. The majority had an electrocorticogram performance status of 0, but high-risk features with a predominance for synchronous metastatic disease and metastatic high volume. About 16% had received prior docetaxel, in keeping with real world data, Attard noted.

At a median follow-up of 30.8 months, niraparib plus standard therapy led to a significant 37% reduction in the risk for radiographic progression or death. The median radiographic PFS (rPFS) was not reached in the niraparib group vs 29.5 months in the placebo group (hazard ratio [HR], 0.63; P = .0001).

Patients with BRCA alterations, in particular, showed the greatest benefit, with niraparib reducing the risk for radiographic progression or death by 48% compared to placebo (median rPFS not reached vs 26 months; HR, 0.52; P < .0001).

On the key secondary endpoint of time to symptomatic progression, adding niraparib led to a “statistically and clinically” significant benefit — a 50% lower in the risk for symptomatic progression in the full population (HR, 0.50), and a 56% lower risk in BRCA-mutant group (HR, 0.44).

The first interim analysis also showed an early trend toward improved overall survival favoring the niraparib combination, with a reduction in the risk for death of 21% in the HRR-mutant population (HR, 0.79; P = .10) and 25% (HR, 0.75; P = .15) in the BRCA-mutant population.

Grade 3/4 adverse events were more common with the niraparib combination group compared to the placebo group (75% vs 59%), with anemia and hypertension being the most common. However, treatment discontinuations due to adverse remained low (15% with niraparib vs 10% with placebo).

Attard noted, however, that half the target number of patients required for the final analysis died. Still, “in my view, there’s a clear trend for favoring survival in the patients randomized to niraparib,” he told attendees.

 

‘Exciting News’ for Patients 

The AMPLITUDE results are “really exciting news for our patients,” McGregor said.

Considering the poor prognosis of patients with metastatic castration-sensitive prostate cancer, “it is reasonable to prioritize early access to PARP inhibitors for these men, at least for the ones with BRCA mutations,” added ASCO discussant Joaquin Mateo, MD, PhD, with Vall d’Hebron Institute of Oncology, Barcelona, Spain.

However, Mateo explained, “I think that for patients with mutations in the other genes, I will be more prudent, and I’ll be on the lookout for the overall survival data to mature.”

The other key conclusion, Mateo said, is that genomic profiling “should be moved earlier into the patient course, and I am confident that embedding genomic profiling into the diagnostic evaluations of metastatic prostate cancer is also going to result in better quality of testing, more efficacious testing, and also a more equitable framework of access to testing for patients.”



This study was funded by Janssen Research & Development, LLC. Attard and Mateo disclosed relationships with Janssen and other pharmaceutical companies. McGregor disclosed relationships with Arcus Biosciences, Astellas, AVEO, Bristol Myers Squibb, Daiichi Sankyo, AstraZeneca, and other companies.

A version of this article first appeared on Medscape.com.

Adding the poly (ADP-ribose) polymerase (PARP) inhibitor niraparib to abiraterone acetate plus prednisone delayed disease progression and postponed the onset of symptoms in patients with metastatic castration-sensitive prostate cancer with homologous recombination repair (HRR) genetic alterations, according to findings from the AMPLITUDE trial.

An interim analysis also demonstrated an early trend toward improved overall survival in patients who received niraparib.

These findings support adding niraparib to abiraterone acetate plus prednisone “as a new treatment option” in patients with HRR alterations, said Study Chief Gerhardt Attard, MD, PhD, chair of medical oncology, University College London Cancer Institute, London, England, speaking at the American Society of Clinical Oncology (ASCO) 2025 annual meeting.

The findings also highlight that “it’s going to be incredibly important that patients who get diagnosed with hormone-sensitive prostate cancer are tested to see if they have these mutations, so they can be offered the right therapy at the right time,” Outside Expert Bradley McGregor, MD, with Dana-Farber Cancer Institute in Boston, said during a press briefing.

Ultimately, “you don’t know if you don’t test,” McGregor added.

About one quarter of patients with metastatic castration-sensitive prostate cancer have alterations in HRR genes, about half of which are BRCA mutations. These patients typically experience faster disease progression and worse outcomes. An androgen receptor pathway inhibitor, such as abiraterone, alongside androgen deprivation therapy with or without docetaxel, is standard therapy for these patients, but “there is still a need for treatments that are tailored to patients whose tumors harbor HRR alterations,” Attard said in a press release.

Adding niraparib to this standard regimen could help improve survival in these patients.

In 2023, the FDA approved niraparib and abiraterone acetate to treat BRCA-mutated metastatic castration-resistant prostate cancer, after findings from the MAGNITUDE study demonstrated improved progression-free survival (PFS).

The phase 3 AMPLITUDE trial set out to evaluate whether this combination would yield similar survival benefits in metastatic castration-sensitive prostate cancer with HRR mutations.

In the study, 696 patients (median age, 68 years) with metastatic castration-sensitive prostate cancer and one or more HRR gene alterations were randomly allocated (1:1) to niraparib with abiraterone acetate plus prednisone or placebo with abiraterone acetate plus prednisone.

Exclusion criteria included any prior PARP inhibitor therapy or androgen receptor pathway inhibitor other than abiraterone. Eligible patients could have received at most 6 months of androgen deprivation therapy, ≤ 6 cycles of docetaxel, ≤ 45 days of abiraterone acetate plus prednisone and palliative radiation.

Baseline characteristics were well balanced between the groups. Just over half the patients in each group had BRCA1 or BRCA2 alterations. The majority had an electrocorticogram performance status of 0, but high-risk features with a predominance for synchronous metastatic disease and metastatic high volume. About 16% had received prior docetaxel, in keeping with real world data, Attard noted.

At a median follow-up of 30.8 months, niraparib plus standard therapy led to a significant 37% reduction in the risk for radiographic progression or death. The median radiographic PFS (rPFS) was not reached in the niraparib group vs 29.5 months in the placebo group (hazard ratio [HR], 0.63; P = .0001).

Patients with BRCA alterations, in particular, showed the greatest benefit, with niraparib reducing the risk for radiographic progression or death by 48% compared to placebo (median rPFS not reached vs 26 months; HR, 0.52; P < .0001).

On the key secondary endpoint of time to symptomatic progression, adding niraparib led to a “statistically and clinically” significant benefit — a 50% lower in the risk for symptomatic progression in the full population (HR, 0.50), and a 56% lower risk in BRCA-mutant group (HR, 0.44).

The first interim analysis also showed an early trend toward improved overall survival favoring the niraparib combination, with a reduction in the risk for death of 21% in the HRR-mutant population (HR, 0.79; P = .10) and 25% (HR, 0.75; P = .15) in the BRCA-mutant population.

Grade 3/4 adverse events were more common with the niraparib combination group compared to the placebo group (75% vs 59%), with anemia and hypertension being the most common. However, treatment discontinuations due to adverse remained low (15% with niraparib vs 10% with placebo).

Attard noted, however, that half the target number of patients required for the final analysis died. Still, “in my view, there’s a clear trend for favoring survival in the patients randomized to niraparib,” he told attendees.

 

‘Exciting News’ for Patients 

The AMPLITUDE results are “really exciting news for our patients,” McGregor said.

Considering the poor prognosis of patients with metastatic castration-sensitive prostate cancer, “it is reasonable to prioritize early access to PARP inhibitors for these men, at least for the ones with BRCA mutations,” added ASCO discussant Joaquin Mateo, MD, PhD, with Vall d’Hebron Institute of Oncology, Barcelona, Spain.

However, Mateo explained, “I think that for patients with mutations in the other genes, I will be more prudent, and I’ll be on the lookout for the overall survival data to mature.”

The other key conclusion, Mateo said, is that genomic profiling “should be moved earlier into the patient course, and I am confident that embedding genomic profiling into the diagnostic evaluations of metastatic prostate cancer is also going to result in better quality of testing, more efficacious testing, and also a more equitable framework of access to testing for patients.”



This study was funded by Janssen Research & Development, LLC. Attard and Mateo disclosed relationships with Janssen and other pharmaceutical companies. McGregor disclosed relationships with Arcus Biosciences, Astellas, AVEO, Bristol Myers Squibb, Daiichi Sankyo, AstraZeneca, and other companies.

A version of this article first appeared on Medscape.com.


FROM ASCO 2025

Walnuts Cut Gut Permeability in Obesity


Walnut consumption modified the fecal microbiota and metabolome, improved insulin response, and reduced gut permeability in adults with obesity, a small study showed.

“Less than 10% of adults are meeting their fiber needs each day, and walnuts are a source of dietary fiber, which helps nourish the gut microbiota,” study coauthor Hannah Holscher, PhD, RD, associate professor of nutrition at the University of Illinois at Urbana-Champaign, told GI & Hepatology News.

Hannah Holscher



Holscher and her colleagues previously conducted a study on the effects of walnut consumption on the human intestinal microbiota “and found interesting results,” she said. Among 18 healthy men and women with a mean age of 53 years, “walnuts enriched intestinal microorganisms, including Roseburia that provide important gut-health promoting attributes, like short-chain fatty acid production. We also saw lower proinflammatory secondary bile acid concentrations in individuals that ate walnuts.”

The current study, presented at NUTRITION 2025 in Orlando, Florida, found similar benefits among 30 adults with obesity but without diabetes or gastrointestinal disease.

 

Walnut Halves, Walnut Oil, Corn Oil — Compared

The researchers aimed to determine the impact of walnut consumption on the gut microbiome, serum and fecal bile acid profiles, systemic inflammation, and oral glucose tolerance to a mixed-meal challenge.

Participants were enrolled in a randomized, controlled, crossover, complete feeding trial with three 3-week conditions, each identical except for walnut halves (WH), walnut oil (WO), or corn oil (CO) in the diet. A 3-week washout separated each condition.

“This was a fully controlled dietary feeding intervention,” Holscher said. “We provided their breakfast, lunch, snacks and dinners — all of their foods and beverages during the three dietary intervention periods that lasted for 3 weeks each. Their base diet consisted of typical American foods that you would find in a grocery store in central Illinois.”

Fecal samples were collected on days 18-20. On day 20, participants underwent a 6-hour mixed-meal tolerance test (75 g glucose + treatment) with a fasting blood draw followed by blood sampling every 30 minutes.

The fecal microbiome and microbiota were assessed using metagenomic and amplicon sequencing, respectively. Fecal microbial metabolites were quantified using gas chromatography-mass spectrometry.

Blood glucose, insulin, and inflammatory biomarkers (interleukin-6, tumor necrosis factor-alpha, C-reactive protein, and lipopolysaccharide-binding protein) were quantified. Fecal and circulating bile acids were measured via liquid chromatography tandem mass spectrometry.

Gut permeability was assessed by quantifying 24-hour urinary excretion of orally ingested sucralose and erythritol on day 21.
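Dual-sugar permeability tests of this kind are typically read out as the fraction of each orally ingested probe recovered in urine; lower recovery of the larger probe (sucralose) suggests a less permeable gut barrier. A minimal sketch of the calculation, with hypothetical dose and excretion values (the study's per-participant raw values were not reported):

```python
def percent_recovery(urine_mg: float, dose_mg: float) -> float:
    """Percent of an orally ingested probe recovered in 24-h urine."""
    return 100 * urine_mg / dose_mg

# Hypothetical values for illustration only.
sucralose_dose_mg = 1000    # assumed oral dose of the sucralose probe
urine_sucralose_mg = 105    # assumed 24-h urinary excretion

print(percent_recovery(urine_sucralose_mg, sucralose_dose_mg))
```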

Linear mixed-effects models and repeated measures ANOVA were used for the statistical analysis.

The team found that Roseburia spp were greatest following WH (3.9%) vs WO (1.6%) and CO (1.9%); Lachnospiraceae UCG-001 and UCG-004 were also greatest with WH vs WO and CO.

Fecal isobutyrate concentrations were lower with WH (5.41 µmol/g) than with WO (7.17 µmol/g) and CO (7.77 µmol/g). Similarly, fecal isovalerate concentrations were lowest with WH (7.84 µmol/g) vs WO (10.3 µmol/g) and CO (11.6 µmol/g).

In contrast, indoles were highest with WH (36.8 µmol/g) vs WO (6.78 µmol/g) and CO (8.67 µmol/g).

No differences in glucose concentrations were seen among groups. The 2-hour area under the curve (AUC) for insulin was lower with WH (469 µIU/mL/min) and WO (494 µIU/mL/min) than with CO (604 µIU/mL/min).
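An AUC of this kind is conventionally computed with the trapezoidal rule over the timed blood samples. A small sketch of that computation with hypothetical insulin values sampled every 30 minutes (not the study's raw data), giving an AUC in µIU/mL × min:

```python
import numpy as np

# Hypothetical 2-h insulin curve from a mixed-meal test, sampled every 30 min.
time_min = np.array([0, 30, 60, 90, 120], dtype=float)
insulin_uIU_mL = np.array([8.0, 95.0, 70.0, 45.0, 25.0])

# Trapezoidal rule: average adjacent values, weight by the time interval.
auc = float(np.sum((insulin_uIU_mL[:-1] + insulin_uIU_mL[1:]) / 2
                   * np.diff(time_min)))

print(auc)  # units: µIU/mL x min
```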

The 4-hour AUC for glycolithocholic acid was lower with WH vs WO and CO. Furthermore, sucralose recovery was lowest following WH (10.5) vs WO (14.3) and CO (14.6).

“Our current efforts are focused on understanding connections between plasma bile acids and glycemic control (ie, blood glucose and insulin concentrations),” Holscher said. “We are also interested in studying individualized or personalized responses, since people had different magnitudes of responses.”

In addition, she said, “as the gut microbiome is one of the factors that can underpin the physiological response to the diet, we are interested in determining if there are microbial signatures that are predictive of glycemic control.”

Because the research is still in the early stages, at this point, Holscher simply encourages people to eat a variety of fruits, vegetables, whole grains, legumes and nuts to meet their daily fiber recommendations and support their gut microbiome.

This study was funded by a USDA NIFA grant. No competing interests were reported.

A version of this article appeared on Medscape.com.


AI Algorithm Predicts Transfusion Need, Mortality Risk in Acute GI Bleeds


SAN DIEGO — A novel generative artificial intelligence (AI) framework known as trajectory flow matching (TFM) can predict the need for red blood cell transfusion and mortality risk in intensive care unit (ICU) patients with acute gastrointestinal (GI) bleeding, researchers reported at Digestive Disease Week® (DDW) 2025.

Acute GI bleeding is the most common cause of digestive disease–related hospitalization, with an estimated 500,000 hospital admissions annually. It’s known that predicting the need for red blood cell transfusion in the first 24 hours may improve resuscitation and decrease both morbidity and mortality.

However, an existing clinical score known as the Rockall Score does not perform well for predicting mortality, Xi (Nicole) Zhang, an MD-PhD student at McGill University, Montreal, Quebec, Canada, told attendees at DDW. With an area under the curve of 0.65-0.75, better prediction is needed, said Zhang, whose coresearchers included Dennis Shung, MD, MHS, PhD, director of Applied Artificial Intelligence at Yale University School of Medicine, New Haven, Connecticut.

Dr. Xi Zhang



“We’d like to predict multiple outcomes in addition to mortality,” said Zhang, who is also a student at the Mila-Quebec Artificial Intelligence Institute.

As a result, the researchers turned to the TFM approach, applying it to ICU patients with acute GI bleeding to predict both the need for transfusion and in-hospital mortality risk. The all-cause mortality rate is up to 11%, according to a 2020 study by James Y. W. Lau, MD, and colleagues. The rebleeding rate of nonvariceal upper GI bleeds is up to 10.4%. Zhang said the rebleeding rate for variceal upper gastrointestinal bleeding is up to 65%.

The AI method the researchers used outperformed a standard deep learning model at predicting the need for transfusion and estimating mortality risk.

 

Defining the AI Framework

“Probabilistic flow matching is a class of generative artificial intelligence that learns how a simple distribution becomes a more complex distribution with ordinary differential equations,” Zhang told GI & Hepatology News. “For example, if you had a few lines and shapes you could learn how it could become a detailed portrait of a face. In our case, we start with a few blood pressure and heart rate measurements and learn the pattern of blood pressures and heart rates over time, particularly if they reflect clinical deterioration with hemodynamic instability.”

Another way to think about the underlying algorithm, Zhang said, is to think about a river with boats where the river flow determines where the boats end up. “We are trying to direct the boat to the correct dock by adjusting the flow of water in the canal. In this case we are mapping the distribution with the first few data points to the distribution with the entire patient trajectory.”
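The idea Zhang describes can be sketched in a toy form. The following is a hypothetical 1-D illustration of flow matching in general, not the authors' TFM model: a velocity field is fit by least squares so that samples from a simple Gaussian "prior" flow, under an ordinary differential equation, into a more complex target distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Source ("simple") and target ("complex") distributions, 1-D toy example.
n = 4000
x0 = rng.normal(0.0, 1.0, n)     # simple prior samples
x1 = rng.normal(4.0, 0.5, n)     # target distribution samples

# Flow-matching training data: interpolate between paired samples and
# regress the velocity (x1 - x0) along the straight-line path.
t = rng.uniform(0.0, 1.0, n)
xt = (1 - t) * x0 + t * x1
v_target = x1 - x0

# Fit v(x, t) with plain least squares on simple bilinear features
# (a stand-in for the neural network a real implementation would use).
X = np.column_stack([np.ones(n), xt, t, xt * t])
coef, *_ = np.linalg.lstsq(X, v_target, rcond=None)

def v(x, tt):
    return coef[0] + coef[1] * x + coef[2] * tt + coef[3] * x * tt

# Euler-integrate the learned ODE to transport fresh prior samples.
x = rng.normal(0.0, 1.0, 2000)
for step in range(100):
    x = x + v(x, step / 100) * 0.01

print(round(float(x.mean()), 1))  # transported mean should land near 4
```

In the study's setting, the "prior" is the distribution consistent with the first few blood pressure and heart rate measurements, and the learned flow maps it to the distribution over full patient trajectories.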

The information gained, she said, could be helpful in timing endoscopic evaluation or allocating red blood cell products for emergent transfusion.

 

Study Details

The researchers evaluated a cohort of 2602 patients admitted to the ICU, identified from the publicly available MIMIC-III database. They divided the patients into a training set of 2342 patients and an internal validation set of 260 patients. Input variables were severe liver disease comorbidity, administration of vasopressor medications, mean arterial blood pressure, and heart rate over the first 24 hours.

Excluded was hemoglobin, since the point was to test the trajectory of hemodynamic parameters independent of hemoglobin thresholds used to guide red blood cell transfusion.

The outcome measures were administration of packed red blood cell transfusion within 24 hours and all-cause hospital mortality.

The TFM was more accurate than a standard deep learning model in predicting red blood cell transfusion (93.6% vs 43.2%; P ≤ .001). It was also more accurate at predicting all-cause in-hospital mortality (89.5% vs 42.5%; P = .01).

The researchers concluded that the TFM approach was able to predict the hemodynamic trajectories of patients with acute GI bleeding, with error defined as deviation from the measured mean arterial pressure and heart rate, and that it outperformed the baseline model.

 

Expert Perspective

“This is an exciting proof-of-concept study that shows generative AI methods may be applied to complex datasets in order to improve on our current predictive models and improve patient care,” said Jeremy Glissen Brown, MD, MSc, an assistant professor of medicine and a practicing gastroenterologist at Duke University who has published research on the use of AI in clinical practice. He reviewed the study for GI & Hepatology News but was not involved in the research.

Dr. Jeremy Glissen Brown

“Future work will likely look into the implementation of a version of this model on real-time data,” he said. “We are at an exciting inflection point in predictive models within GI and clinical medicine. Predictive models based on deep learning and generative AI hold the promise of improving how we predict and treat disease states, but the excitement being generated with studies such as this needs to be balanced with the trade-offs inherent to the current paradigm of deep learning and generative models compared to more traditional regression-based models. These include many of the same ‘black box’ explainability questions that have risen in the age of convolutional neural networks as well as some method-specific questions due to the continuous and implicit nature of TFM.”

Elaborating on that, Glissen Brown said: “TFM, like many deep learning techniques, raises concerns about explainability that we’ve long seen with convolutional neural networks — the ‘black box’ problem, where it’s difficult to interpret exactly how and why the model arrives at a particular decision. But TFM also introduces unique challenges due to its continuous and implicit formulation. Since it often learns flows without explicitly defining intermediate representations or steps, it can be harder to trace the logic or pathways it uses to connect inputs to outputs. This makes standard interpretability tools less effective and calls for new techniques tailored to these continuous architectures.”

“This approach could have a real clinical impact,” said Robert Hirten, MD, associate professor of medicine and artificial intelligence, Icahn School of Medicine at Mount Sinai, New York City, who also reviewed the study. “Accurately predicting transfusion needs and mortality risk in real time could support earlier, more targeted interventions for high-risk patients. While these findings still need to be validated in prospective studies, it could enhance ICU decision-making and resource allocation.”

Dr. Robert Hirten



“For the practicing gastroenterologist, we envision this system could help them figure out when to perform endoscopy in a patient admitted with acute gastrointestinal bleeding in the ICU at very high risk of exsanguination,” Zhang told GI & Hepatology News.

The approach, the researchers said, will be useful for identifying unique patient characteristics, making it possible to identify high-risk patients and enabling more personalized medicine.

Hirten, Zhang, and Shung had no disclosures. Glissen Brown reported consulting relationships with Medtronic, OdinVision, Doximity, and Olympus. The National Institutes of Health funded this study.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

SAN DIEGO — A novel generative artificial intelligence (AI) framework known as trajectory flow matching (TFM) can predict the need for red blood cell transfusion and mortality risk in intensive care unit (ICU) patients with acute gastrointestinal (GI) bleeding, researchers reported at Digestive Disease Week® (DDW) 2025.

Acute GI bleeding is the most common cause of digestive disease–related hospitalization, with an estimated 500,000 hospital admissions annually. It’s known that predicting the need for red blood cell transfusion in the first 24 hours may improve resuscitation and decrease both morbidity and mortality.

However, an existing clinical score known as the Rockall Score does not perform well for predicting mortality, Xi (Nicole) Zhang, an MD-PhD student at McGill University, Montreal, Quebec, Canada, told attendees at DDW. With an area under the curve of 0.65-0.75, better prediction is needed, said Zhang, whose coresearchers included Dennis Shung, MD, MHS, PhD, director of Applied Artificial Intelligence at Yale University School of Medicine, New Haven, Connecticut.

Dr. Xi Zhang



“We’d like to predict multiple outcomes in addition to mortality,” said Zhang, who is also a student at the Mila-Quebec Artificial Intelligence Institute.

As a result, the researchers turned to the TFM approach, applying it to ICU patients with acute GI bleeding to predict both the need for transfusion and in-hospital mortality risk. The all-cause mortality rate is up to 11%, according to a 2020 study by James Y. W. Lau, MD, and colleagues. The rebleeding rate of nonvariceal upper GI bleeds is up to 10.4%. Zhang said the rebleeding rate for variceal upper gastrointestinal bleeding is up to 65%.

The AI method the researchers used outperformed a standard deep learning model at predicting the need for transfusion and estimating mortality risk.

 

Defining the AI Framework

“Probabilistic flow matching is a class of generative artificial intelligence that learns how a simple distribution becomes a more complex distribution with ordinary differential equations,” Zhang told GI & Hepatology News. “For example, if you had a few lines and shapes you could learn how it could become a detailed portrait of a face. In our case, we start with a few blood pressure and heart rate measurements and learn the pattern of blood pressures and heart rates over time, particularly if they reflect clinical deterioration with hemodynamic instability.”

Another way to think about the underlying algorithm, Zhang said, is to think about a river with boats where the river flow determines where the boats end up. “We are trying to direct the boat to the correct dock by adjusting the flow of water in the canal. In this case we are mapping the distribution with the first few data points to the distribution with the entire patient trajectory.”

The information gained, she said, could be helpful in timing endoscopic evaluation or allocating red blood cell products for emergent transfusion.

 

Study Details

The researchers evaluated a cohort of 2602 patients admitted to the ICU, identified from the publicly available MIMIC-III database. They divided the patients into a training set of 2342 patients and an internal validation set of 260 patients. Input variables were severe liver disease comorbidity, administration of vasopressor medications, mean arterial blood pressure, and heart rate over the first 24 hours.

Excluded was hemoglobin, since the point was to test the trajectory of hemodynamic parameters independent of hemoglobin thresholds used to guide red blood cell transfusion.

The outcome measures were administration of packed red blood cell transfusion within 24 hours and all-cause hospital mortality.

The TFM was more accurate than a standard deep learning model in predicting red blood cell transfusion, with an accuracy of 93.6% vs 43.2%; P ≤ .001. It was also more accurate at predicting all-cause in-hospital mortality, with an accuracy of 89.5% vs 42.5%, P = .01.

The researchers concluded that the TFM approach was able to predict the hemodynamic trajectories of patients with acute GI bleeding defined as deviation and outperformed the baseline from the measured mean arterial pressure and heart rate.

 

Expert Perspective

“This is an exciting proof-of-concept study that shows generative AI methods may be applied to complex datasets in order to improve on our current predictive models and improve patient care,” said Jeremy Glissen Brown, MD, MSc, an assistant professor of medicine and a practicing gastroenterologist at Duke University who has published research on the use of AI in clinical practice. He reviewed the study for GI & Hepatology News but was not involved in the research.

Dr. Jeremy Glissen Brown

“Future work will likely look into the implementation of a version of this model on real-time data.” he said. “We are at an exciting inflection point in predictive models within GI and clinical medicine. Predictive models based on deep learning and generative AI hold the promise of improving how we predict and treat disease states, but the excitement being generated with studies such as this needs to be balanced with the trade-offs inherent to the current paradigm of deep learning and generative models compared to more traditional regression-based models. These include many of the same ‘black box’ explainability questions that have risen in the age of convolutional neural networks as well as some method-specific questions due to the continuous and implicit nature of TFM.”

Elaborating on that, Glissen Brown said: “TFM, like many deep learning techniques, raises concerns about explainability that we’ve long seen with convolutional neural networks — the ‘black box’ problem, where it’s difficult to interpret exactly how and why the model arrives at a particular decision. But TFM also introduces unique challenges due to its continuous and implicit formulation. Since it often learns flows without explicitly defining intermediate representations or steps, it can be harder to trace the logic or pathways it uses to connect inputs to outputs. This makes standard interpretability tools less effective and calls for new techniques tailored to these continuous architectures.”

“This approach could have a real clinical impact,” said Robert Hirten, MD, associate professor of medicine and artificial intelligence, Icahn School of Medicine at Mount Sinai, New York City, who also reviewed the study. “Accurately predicting transfusion needs and mortality risk in real time could support earlier, more targeted interventions for high-risk patients. While these findings still need to be validated in prospective studies, it could enhance ICU decision-making and resource allocation.”

Dr. Robert Hirten



“For the practicing gastroenterologist, we envision this system could help them figure out when to perform endoscopy in a patient admitted with acute gastrointestinal bleeding in the ICU at very high risk of exsanguination,” Zhang told GI & Hepatology News.

The approach, the researchers said, will be useful in identifying unique patient characteristics, make possible the identification of high-risk patients and lead to more personalized medicine.

Hirten, Zhang, and Shung had no disclosures. Glissen Brown reported consulting relationships with Medtronic, OdinVision, Doximity, and Olympus. The National Institutes of Health funded this study.

A version of this article appeared on Medscape.com.

SAN DIEGO — A novel generative artificial intelligence (AI) framework known as trajectory flow matching (TFM) can predict the need for red blood cell transfusion and mortality risk in intensive care unit (ICU) patients with acute gastrointestinal (GI) bleeding, researchers reported at Digestive Disease Week® (DDW) 2025.

Acute GI bleeding is the most common cause of digestive disease–related hospitalization, with an estimated 500,000 hospital admissions annually. It’s known that predicting the need for red blood cell transfusion in the first 24 hours may improve resuscitation and decrease both morbidity and mortality.

However, an existing clinical score known as the Rockall Score does not perform well for predicting mortality, Xi (Nicole) Zhang, an MD-PhD student at McGill University, Montreal, Quebec, Canada, told attendees at DDW. The score's area under the curve of only 0.65-0.75 leaves substantial room for better prediction, said Zhang, whose coresearchers included Dennis Shung, MD, MHS, PhD, director of Applied Artificial Intelligence at Yale University School of Medicine, New Haven, Connecticut.

Dr. Xi Zhang



“We’d like to predict multiple outcomes in addition to mortality,” said Zhang, who is also a student at the Mila-Quebec Artificial Intelligence Institute.

As a result, the researchers turned to the TFM approach, applying it to ICU patients with acute GI bleeding to predict both the need for transfusion and in-hospital mortality risk. The all-cause mortality rate is up to 11%, according to a 2020 study by James Y. W. Lau, MD, and colleagues. The rebleeding rate of nonvariceal upper GI bleeds is up to 10.4%. Zhang said the rebleeding rate for variceal upper gastrointestinal bleeding is up to 65%.

The AI method the researchers used outperformed a standard deep learning model at predicting the need for transfusion and estimating mortality risk.

 

Defining the AI Framework

“Probabilistic flow matching is a class of generative artificial intelligence that learns how a simple distribution becomes a more complex distribution with ordinary differential equations,” Zhang told GI & Hepatology News. “For example, if you had a few lines and shapes you could learn how it could become a detailed portrait of a face. In our case, we start with a few blood pressure and heart rate measurements and learn the pattern of blood pressures and heart rates over time, particularly if they reflect clinical deterioration with hemodynamic instability.”

Another way to think about the underlying algorithm, Zhang said, is to think about a river with boats where the river flow determines where the boats end up. “We are trying to direct the boat to the correct dock by adjusting the flow of water in the canal. In this case we are mapping the distribution with the first few data points to the distribution with the entire patient trajectory.”
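Zhang's analogy maps directly onto how flow matching is trained. As a purely illustrative sketch (a toy one-dimensional example with straight-line interpolation paths and a least-squares linear velocity field, not the authors' TFM model), the snippet below learns a velocity field that transports a simple Gaussian into a shifted, narrower one, then integrates the resulting ordinary differential equation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x0 = rng.normal(0.0, 1.0, n)   # samples from the simple base distribution
x1 = rng.normal(3.0, 0.5, n)   # samples from the "complex" target distribution
t = rng.random(n)              # random times in [0, 1)

xt = (1 - t) * x0 + t * x1     # straight-line interpolation between the pairs
ut = x1 - x0                   # flow-matching regression target: path velocity

# Fit a linear velocity field v(x, t) = w0*x + w1*t + w2 by least squares
A = np.column_stack([xt, t, np.ones(n)])
w, *_ = np.linalg.lstsq(A, ut, rcond=None)

# Transport fresh base samples by Euler-integrating dx/dt = v(x, t)
x = rng.normal(0.0, 1.0, 5_000)
dt = 0.01
for step in range(100):
    s = step * dt
    x = x + dt * (w[0] * x + w[1] * s + w[2])

print(round(float(x.mean()), 2))  # lands near the target mean of 3
```

A real TFM model replaces the linear fit with a neural network and conditions on each patient's observed vitals, but the train-a-velocity-field-then-integrate structure is the same.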

The information gained, she said, could be helpful in timing endoscopic evaluation or allocating red blood cell products for emergent transfusion.

 

Study Details

The researchers evaluated a cohort of 2602 patients admitted to the ICU, identified from the publicly available MIMIC-III database. They divided the patients into a training set of 2342 patients and an internal validation set of 260 patients. Input variables were severe liver disease comorbidity, administration of vasopressor medications, mean arterial blood pressure, and heart rate over the first 24 hours.

Hemoglobin was excluded because the aim was to test the trajectory of hemodynamic parameters independent of the hemoglobin thresholds used to guide red blood cell transfusion.

The outcome measures were administration of packed red blood cell transfusion within 24 hours and all-cause hospital mortality.

The TFM was more accurate than a standard deep learning model in predicting red blood cell transfusion, with an accuracy of 93.6% vs 43.2%; P ≤ .001. It was also more accurate at predicting all-cause in-hospital mortality, with an accuracy of 89.5% vs 42.5%, P = .01.

The researchers concluded that the TFM approach was able to predict the hemodynamic trajectories of patients with acute GI bleeding, defined as deviation from the measured mean arterial pressure and heart rate, and that it outperformed the baseline model.

 

Expert Perspective

“This is an exciting proof-of-concept study that shows generative AI methods may be applied to complex datasets in order to improve on our current predictive models and improve patient care,” said Jeremy Glissen Brown, MD, MSc, an assistant professor of medicine and a practicing gastroenterologist at Duke University who has published research on the use of AI in clinical practice. He reviewed the study for GI & Hepatology News but was not involved in the research.

Dr. Jeremy Glissen Brown

“Future work will likely look into the implementation of a version of this model on real-time data,” he said. “We are at an exciting inflection point in predictive models within GI and clinical medicine. Predictive models based on deep learning and generative AI hold the promise of improving how we predict and treat disease states, but the excitement being generated with studies such as this needs to be balanced with the trade-offs inherent to the current paradigm of deep learning and generative models compared to more traditional regression-based models. These include many of the same ‘black box’ explainability questions that have risen in the age of convolutional neural networks as well as some method-specific questions due to the continuous and implicit nature of TFM.”

Elaborating on that, Glissen Brown said: “TFM, like many deep learning techniques, raises concerns about explainability that we’ve long seen with convolutional neural networks — the ‘black box’ problem, where it’s difficult to interpret exactly how and why the model arrives at a particular decision. But TFM also introduces unique challenges due to its continuous and implicit formulation. Since it often learns flows without explicitly defining intermediate representations or steps, it can be harder to trace the logic or pathways it uses to connect inputs to outputs. This makes standard interpretability tools less effective and calls for new techniques tailored to these continuous architectures.”

“This approach could have a real clinical impact,” said Robert Hirten, MD, associate professor of medicine and artificial intelligence, Icahn School of Medicine at Mount Sinai, New York City, who also reviewed the study. “Accurately predicting transfusion needs and mortality risk in real time could support earlier, more targeted interventions for high-risk patients. While these findings still need to be validated in prospective studies, it could enhance ICU decision-making and resource allocation.”

Dr. Robert Hirten



“For the practicing gastroenterologist, we envision this system could help them figure out when to perform endoscopy in a patient admitted with acute gastrointestinal bleeding in the ICU at very high risk of exsanguination,” Zhang told GI & Hepatology News.

The approach, the researchers said, will be useful for identifying unique patient characteristics, flagging high-risk patients, and enabling more personalized medicine.

Hirten, Zhang, and Shung had no disclosures. Glissen Brown reported consulting relationships with Medtronic, OdinVision, Doximity, and Olympus. The National Institutes of Health funded this study.

A version of this article appeared on Medscape.com.

FROM DDW 2025

Chatbot Helps Users Adopt a Low FODMAP Diet

Article Type
Changed

SAN DIEGO — Dietary advice to limit fermentable oligo-, di-, and monosaccharides and polyols (FODMAPs) has been shown to be effective in easing bloating and abdominal pain, especially in patients with irritable bowel syndrome (IBS), but the limited availability of dietitians makes delivering this advice challenging. Researchers from Thailand have successfully enlisted a chatbot to help.

In a randomized controlled trial, they found that chatbot-assisted dietary advice with brief guidance effectively reduced high-FODMAP intake and bloating severity and improved dietary knowledge, particularly in patients with bothersome bloating.

“Chatbot-assisted dietary advice for FODMAPs restriction was feasible and applicable in patients with bloating symptoms that had baseline symptoms of moderate severity,” study chief Pochara Somvanapanich, with the Division of Gastroenterology, Chulalongkorn University and King Chulalongkorn Memorial Hospital, Bangkok, Thailand, told GI & Hepatology News.

Somvanapanich, who developed the chatbot algorithm, presented the study results at Digestive Disease Week (DDW) 2025.

 

More Knowledge, Less Bloating

The trial enrolled 86 adults with disorders of gut-brain interaction experiencing bloating symptoms for more than 6 months and consuming more than seven high-FODMAPs items per week. Half of them had IBS.

At baseline, gastrointestinal (GI) symptoms and the ability to identify FODMAPs were assessed. All participants received a 5-minute consultation on FODMAPs avoidance from a GI fellow and were randomly allocated (stratified by IBS diagnosis and education) into two groups.

The chatbot-assisted group received real-time dietary advice via a chatbot, which helped them identify high-, low-, and non-FODMAP foods from a list of more than 300 ingredients and dishes of Thai and Western cuisines.

The control group received only brief advice on high-FODMAPs restriction. Both groups used a diary app to log food intake and postprandial symptoms. Baseline bloating, abdominal pain, and global symptom severity were similar between the two groups. Data on 64 participants (32 in each group) were analyzed.

After 4 weeks, significantly more people in the chatbot group than the control group responded — achieving a 30% or greater reduction in daily worst bloating, abdominal pain or global symptoms (19 [59%] vs 10 [31%], P < .05). Responder rates were similar in the IBS and non-IBS subgroups.
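As a quick plausibility check on the reported comparison: the study's own test statistic is not given, so the Yates-corrected Pearson chi-square below, computed in pure Python on the 2x2 responder table, is an illustration rather than the trial's actual analysis.

```python
# Hypothetical re-check of the reported responder comparison (19/32 vs 10/32).
def chi2_yates(a, b, c, d):
    """2x2 table [[a, b], [c, d]] -> Yates-corrected chi-square statistic."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n
        stat += (abs(obs - exp) - 0.5) ** 2 / exp
    return stat

# Responders vs nonresponders in the chatbot (19 of 32) and control (10 of 32) arms
stat = chi2_yates(19, 13, 10, 22)
print(round(stat, 2))  # → 4.04, above 3.84, the 5% critical value at 1 df
```

The statistic clears the conventional significance threshold, consistent with the reported P < .05.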

Subgroup analysis revealed significant differences between groups only for participants with bothersome bloating, not those with mild bloating severity.

In those with bothersome bloating severity, the chatbot group had a higher response rate (69.5% vs 36.3%) and fewer bloating symptoms (P < .05). They also had a greater reduction in high FODMAPs intake (10 vs 23 items/week) and demonstrated improved knowledge in identifying FODMAPs (P < .05).

“Responders in a chatbot group consistently engaged more with the app, performing significantly more weekly item searches than nonresponders (P < .05),” the authors noted in their conference abstract.

“Our next step is to develop the chatbot-assisted approach for the reintroduction and personalization phase based on messenger applications (including Facebook Messenger and other messaging platforms),” Somvanapanich told GI & Hepatology News.

“Once we’ve gathered enough data to confirm these are working effectively, we definitely plan to create a one-stop service application for FODMAPs dietary advice,” Somvanapanich added.

 

Lack of Robust Data on Digital GI Health Apps

Commenting on this research for GI & Hepatology News, Sidhartha R. Sinha, MD, Director of Digital Health and Innovation, Division of Gastroenterology and Hepatology, Stanford University in Stanford, California, noted that there is a “notable lack of robust data supporting digital health tools in gastroenterology. Despite hundreds of apps available, very few are supported by well-designed trials.”

Dr. Sidhartha R. Sinha

“The study demonstrated that chatbot-assisted dietary advice significantly improved bloating symptoms, reduced intake of high-FODMAP foods, and enhanced patients’ dietary knowledge compared to brief dietary counseling alone, especially in those with bothersome symptoms,” said Sinha, who wasn’t involved in the study.

“Patients actively used the chatbot to manage their symptoms, achieving a higher response rate than those in the control arm who received brief counseling on avoiding high-FODMAP food,” he noted.

Sinha said in his practice at Stanford, “in the heart of Silicon Valley,” patients do use digital resources to manage their GI symptoms, including diseases like IBS and inflammatory bowel disease (IBD) — and he believes this is “increasingly common nationally.”

“However, the need for evidence-based tools is critical and the lack here often prevents many practitioners from regularly recommending them to patients. This study aligns well with clinical practice, and supports the use of this particular app to improve IBS symptoms, particularly when access to dietitians is limited. These results support chatbot-assisted dietary management as a feasible, effective, and scalable approach to patient care,” Sinha told GI & Hepatology News.

The study received no commercial funding. Somvanapanich and Sinha had no relevant disclosures.

A version of this article appeared on Medscape.com.
 


Blood-Based Test May Predict Crohn’s Disease 2 Years Before Onset

Article Type
Changed

SAN DIEGO — Crohn’s disease (CD) has become more common in the United States, and an estimated 1 million Americans have the condition. Still, much is unknown about how to evaluate the individual risk for the disease.

“It’s pretty much accepted that Crohn’s disease does not begin at diagnosis,” said Ryan Ungaro, MD, associate professor of medicine at the Icahn School of Medicine at Mount Sinai, New York City, speaking at Digestive Disease Week (DDW)® 2025.

Dr. Ryan Ungaro



Although individual blood markers have been associated with the future risk for CD, what’s needed, he said, is to understand which combination of biomarkers is most predictive.

Now, Ungaro and his team have developed a risk score that they found accurately predicts CD onset up to 2 years in advance.

It’s an early version that will likely be further improved and needs additional validation, Ungaro told GI & Hepatology News.

“Once we can accurately identify individuals at risk for developing Crohn’s disease, we can then imagine a number of potential interventions,” Ungaro said.

Approaches would vary depending on how far away the onset is estimated to be. For people who likely wouldn’t develop disease for many years, one intervention might be close monitoring to enable diagnosis in the earliest stages, when treatment works best, he said. Someone at a high risk of developing CD in the next 2 or 3 years, on the other hand, might be offered a pharmaceutical intervention.

 

Developing and Testing the Risk Score

To develop the risk score, Ungaro and colleagues analyzed data from 200 patients with CD and 100 healthy control participants in PREDICTS, a nested case-control study of active US military service members. The study draws on the larger Department of Defense Serum Repository, which began in 1985 and holds more than 62.5 million samples, all stored at −30 °C.

The researchers collected serum samples at four timepoints up to 6 or more years before the diagnosis. They assayed antimicrobial antibodies using the Prometheus Laboratories platform, proteomic markers using the Olink inflammation panel, and anti–granulocyte macrophage colony-stimulating factor autoantibodies using enzyme-linked immunosorbent assay.

Participants (median age, 33 years in both groups) were randomly divided into equally sized training and testing sets. In both groups, 83% of patients were White and about 90% were men.

Time-varying trajectories of marker abundance were estimated for each biomarker. Then, logistic regression modeled disease status as a function of each marker for different timepoints and multivariate modeling was performed via logistic LASSO regression.

A risk score to predict CD onset within 2 years was developed. Prediction models were fit on the training set, and predictive performance was evaluated on the testing set using receiver operating characteristic curves and area under the curve (AUC).
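The modeling recipe described here, an L1-penalized ("LASSO") logistic regression producing a per-person risk score that is then evaluated by AUC on held-out data, can be sketched on synthetic data. The sample sizes, effect sizes, and scikit-learn settings below are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n, p = 300, 10                       # 300 participants, 10 candidate biomarkers
X = rng.normal(size=(n, p))
# Assume only 3 of the 10 markers truly carry signal about future disease
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.8 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test = X[:200], X[200:]   # simple train/test split
y_train, y_test = y[:200], y[200:]

# L1 penalty shrinks uninformative marker coefficients toward zero
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X_train, y_train)

risk_score = model.predict_proba(X_test)[:, 1]   # per-person risk score
auc = roc_auc_score(y_test, risk_score)
print(f"held-out AUC: {auc:.2f}")
```

The LASSO step mirrors how a panel of many candidate serologic and proteomic markers can be reduced to the handful, 10 in the study, that drive the final score.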

Blood proteins and antibodies have differing associations with CD depending on the time before diagnosis, the researchers found.

The integrative model to predict CD onset within 2 years incorporated 10 biomarkers associated significantly with CD onset.

The AUC for the model was 0.87 (considered good, with 1 indicating perfect discrimination). It produced a specificity of 99% and a positive predictive value of 84%.

The researchers stratified the model scores into quartiles and found that the incidence of CD within 2 years increased from 2% in the first quartile to 57.7% in the fourth. The relative risk of developing CD for individuals in the top quartile vs those in the lower quartiles was 10.4.
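The reported quartile figures hang together arithmetically, as a back-of-envelope calculation shows. The lower-quartile incidence below is inferred from the reported relative risk, not stated in the study:

```python
# Relative risk is simply the top-quartile incidence divided by the
# incidence in the remaining quartiles; inverting it recovers the
# (unreported) incidence implied for the lower three quartiles.
top_quartile_risk = 0.577      # reported 2-year CD incidence, 4th quartile
relative_risk = 10.4           # reported RR, top vs lower quartiles
implied_lower_risk = top_quartile_risk / relative_risk
print(f"implied lower-quartile incidence: {implied_lower_risk:.1%}")  # ~5.5%
```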

The serologic and proteomic markers show dynamic changes years before the diagnosis, Ungaro said.

 

A Strong Start

The research represents “an ambitious and exciting frontier for the future of IBD [inflammatory bowel disease] care,” said Victor G. Chedid, MD, MS, consultant and assistant professor of medicine at Mayo Clinic, Rochester, Minnesota, who reviewed the findings but was not involved in the study.

Dr. Victor G. Chedid

Currently, physicians treat IBD once it manifests, and it’s difficult to predict who will get CD, he said.

The integrative model’s AUC of 0.87 is impressive, and its specificity and positive predictive value levels show it is highly accurate in predicting the onset of CD within 2 years, Chedid added.

Further validation in larger and more diverse population is needed, Chedid said, but he sees the potential for the model to be practical in clinical practice.

“Additionally, the use of blood-based biomarkers makes the model relatively noninvasive and easy to implement in a clinical setting,” he said.

Now, the research goal is to understand the best biomarkers for characterizing the different preclinical phases of CD and to test different interventions in prevention trials, Ungaro told GI & Hepatology News.

A few trials are planned or ongoing, he noted. The PIONIR trial will look at the impact of a specific diet on the risk of developing CD, and the INTERCEPT trial aims to develop a blood-based risk score that can identify individuals with a high risk of developing CD within 5 years after initial evaluation.

Ungaro reported being on the advisory board of and/or receiving speaker or consulting fees from AbbVie, Bristol Myers Squibb, Celltrion, ECM Therapeutics, Genentech, Janssen, Eli Lilly, Pfizer, Roivant, Sanofi, and Takeda. Chedid reported having no relevant disclosures.

The PROMISE Consortium is funded by the Helmsley Charitable Trust.

A version of this article appeared on Medscape.com.


Further validation in larger and more diverse population is needed, Chedid said, but he sees the potential for the model to be practical in clinical practice.

“Additionally, the use of blood-based biomarkers makes the model relatively noninvasive and easy to implement in a clinical setting,” he said.

Now, the research goal is to understand the best biomarkers for characterizing the different preclinical phases of CD and to test different interventions in prevention trials, Ungaro told GI & Hepatology News.

A few trials are planned or ongoing, he noted. The trial PIONIR trial will look at the impact of a specific diet on the risk of developing CD, and the INTERCEPT trial aims to develop a blood-based risk score that can identify individuals with a high risk of developing CD within 5 years after initial evaluation.

Ungaro reported being on the advisory board of and/or receiving speaker or consulting fees from AbbVie, Bristol Myer Squibb, Celltrion, ECM Therapeutics, Genentech, Jansen, Eli Lilly, Pfizer, Roivant, Sanofi, and Takeda. Chedid reported having no relevant disclosures.

The PROMISE Consortium is funded by the Helmsley Charitable Trust.

A version of this article appeared on Medscape.com.

SAN DIEGO — Crohn’s disease (CD) has become more common in the United States, and an estimated 1 million Americans have the condition. Still, much is unknown about how to evaluate the individual risk for the disease.

“It’s pretty much accepted that Crohn’s disease does not begin at diagnosis,” said Ryan Ungaro, MD, associate professor of medicine at the Icahn School of Medicine at Mount Sinai, New York City, speaking at Digestive Disease Week (DDW)® 2025.

Dr. Ryan Ungaro



Although individual blood markers have been associated with the future risk for CD, what’s needed, he said, is to understand which combination of biomarkers is most predictive.

Now, Ungaro and his team have developed a risk score that they found accurately predicts CD onset within the next 2 years.

It’s an early version that will likely be further improved and needs additional validation, Ungaro told GI & Hepatology News.

“Once we can accurately identify individuals at risk for developing Crohn’s disease, we can then imagine a number of potential interventions,” Ungaro said.

Approaches would vary depending on how far away the onset is estimated to be. For people who likely wouldn’t develop disease for many years, one intervention might be close monitoring to enable diagnosis in the earliest stages, when treatment works best, he said. Someone at a high risk of developing CD in the next 2 or 3 years, on the other hand, might be offered a pharmaceutical intervention.

 

Developing and Testing the Risk Score

To develop the risk score, Ungaro and colleagues analyzed data from 200 patients with CD and 100 healthy control participants in PREDICTS, a nested case-control study of active-duty US military service members. The study draws on the larger Department of Defense Serum Repository, which began in 1985 and holds more than 62.5 million samples, all stored at −30 °C.

The researchers collected serum samples at four timepoints up to 6 or more years before the diagnosis. They assayed antimicrobial antibodies using the Prometheus Laboratories platform, proteomic markers using the Olink inflammation panel, and anti–granulocyte macrophage colony-stimulating factor autoantibodies using enzyme-linked immunosorbent assay.

Participants (median age, 33 years for both groups) were randomly divided into equally sized training and testing sets. In both sets, 83% of participants were White and about 90% were men.

Time-varying trajectories of abundance were estimated for each biomarker. Logistic regression then modeled disease status as a function of each marker at different timepoints, and multivariable modeling was performed via logistic LASSO regression.

A risk score to predict CD onset within 2 years was developed. Prediction models were fit on the training set, and predictive performance was evaluated on the testing set using receiver operating characteristic curves and area under the curve (AUC).
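In outline, this kind of workflow — an equal train/test split, an L1-penalized (LASSO) logistic model, and ROC AUC evaluation on held-out data — can be sketched with scikit-learn. The data below are synthetic and every variable name is illustrative; nothing here is drawn from the PREDICTS cohort:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a biomarker matrix: 300 subjects, 30 candidate markers.
n, p = 300, 30
X = rng.normal(size=(n, p))

# Disease status driven by a handful of "true" markers — a sparse signal.
beta = np.zeros(p)
beta[:5] = [1.2, -0.8, 0.9, 0.7, -1.1]
prob = 1 / (1 + np.exp(-(X @ beta - 0.5)))
y = rng.binomial(1, prob)

# Equally sized training and testing sets, as in the study design.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y)

# Logistic LASSO: the L1 penalty shrinks uninformative markers to zero.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X_tr, y_tr)

# Evaluate discrimination on the held-out set via ROC AUC.
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
n_selected = int((model.coef_ != 0).sum())
print(f"held-out AUC = {auc:.2f}, markers retained = {n_selected}")
```

The L1 penalty drives uninformative coefficients to exactly zero, which is how a compact panel — in the study, 10 retained biomarkers — can emerge from a larger candidate set.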

Blood proteins and antibodies have differing associations with CD depending on the time before diagnosis, the researchers found.

The integrative model to predict CD onset within 2 years incorporated 10 biomarkers significantly associated with CD onset.

The AUC for the model was 0.87 (considered good, with 1 indicating perfect discrimination). It produced a specificity of 99% and a positive predictive value of 84%.

The researchers stratified the model scores into quartiles and found that the incidence of CD within 2 years increased from 2% in the first quartile to 57.7% in the fourth. The relative risk of developing CD for individuals in the top quartile vs those in the lower quartiles was 10.4.
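The quartile analysis is straightforward bookkeeping: rank subjects by model score, cut them into four equal bins, and compare observed incidence across bins. A minimal sketch with invented scores and outcomes (no real study data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical risk scores and 2-year outcomes for 200 subjects.
scores = rng.uniform(size=200)
# Higher score -> higher chance of onset (illustrative only).
onset = rng.binomial(1, 0.05 + 0.55 * scores)

# Assign each subject to a quartile (0-3) by score rank.
quartile = np.searchsorted(np.quantile(scores, [0.25, 0.5, 0.75]), scores)

# Observed incidence within each quartile.
incidence = np.array([onset[quartile == q].mean() for q in range(4)])

# Relative risk: top quartile vs the lower three combined.
rr = incidence[3] / onset[quartile < 3].mean()
print("incidence by quartile:", np.round(incidence, 3), "RR:", round(rr, 1))
```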

The serologic and proteomic markers show dynamic changes years before the diagnosis, Ungaro said.

 

A Strong Start

The research represents “an ambitious and exciting frontier for the future of IBD [inflammatory bowel disease] care,” said Victor G. Chedid, MD, MS, consultant and assistant professor of medicine at Mayo Clinic, Rochester, Minnesota, who reviewed the findings but was not involved in the study.

Dr. Victor G. Chedid

Currently, physicians treat IBD once it manifests, and it’s difficult to predict who will get CD, he said.

The integrative model’s AUC of 0.87 is impressive, and its specificity and positive predictive value levels show it is highly accurate in predicting the onset of CD within 2 years, Chedid added.

Further validation in larger and more diverse populations is needed, Chedid said, but he sees the potential for the model to be practical in clinical practice.

“Additionally, the use of blood-based biomarkers makes the model relatively noninvasive and easy to implement in a clinical setting,” he said.

Now, the research goal is to understand the best biomarkers for characterizing the different preclinical phases of CD and to test different interventions in prevention trials, Ungaro told GI & Hepatology News.

A few trials are planned or ongoing, he noted. The PIONIR trial will look at the impact of a specific diet on the risk of developing CD, and the INTERCEPT trial aims to develop a blood-based risk score that can identify individuals with a high risk of developing CD within 5 years after initial evaluation.

Ungaro reported being on the advisory board of and/or receiving speaker or consulting fees from AbbVie, Bristol Myers Squibb, Celltrion, ECM Therapeutics, Genentech, Janssen, Eli Lilly, Pfizer, Roivant, Sanofi, and Takeda. Chedid reported having no relevant disclosures.

The PROMISE Consortium is funded by the Helmsley Charitable Trust.

A version of this article appeared on Medscape.com.

FROM DDW 2025
Winning Strategies to Retain Private Practice Gastroenterologists


SAN DIEGO — With the recently updated recommendations by the US Preventive Services Task Force lowering the age for colorectal cancer screening to 45 instead of 50, an additional 19 million patients now require screening, Asma Khapra, MD, AGAF, a gastroenterologist at Gastro Health in Fairfax, Virginia, told attendees at Digestive Disease Week® (DDW) 2025.

Dr. Asma Khapra

That change, coupled with the expected shortage of gastroenterologists, means one thing: The current workforce can’t meet patient demand, she said. Private practices in particular face challenges in retaining gastroenterologists, Khapra added.

The private practice model is already declining, she said. The fraction of US gastroenterologists in “fully independent” private practice was about 30% in 2019, Khapra noted. Then, “COVID really changed the landscape even more.” By 2022, “that number has shrunk to 13%.” Meanwhile, 67% are employed gastroenterologists (not in private practice), 7% work in large group practices, and 13% are private equity (PE) backed.

That makes effective retention strategies crucial for private practices, Khapra said. She first addressed the common attractions of private practice, then the challenges, and finally the winning strategies for retaining physicians and maintaining a viable private practice gastroenterology workforce.

 

The Attractions of Private Practice

The reasons for choosing private practice are many, Khapra said, including:

  • Autonomy,
  • Flexibility,
  • Competitive compensation,
  • Ownership mindset,
  • Partnership paths, and
  • Work-life balance including involvement in community and culture.

On the other hand, private practices have unique challenges, including:

  • Administrative burdens such as EHR documentation, paperwork, prior authorizations, and staffing issues,
  • Financial pressures, including competition with the employment packages offered by hospitals, as reimbursements continue to drop and staffing costs increase,
  • Burnout,
  • Variety of buy-ins and partnership tracks,
  • Limited career development, and
  • The strains of aging and endoscopy. “We used to joke in our practice that at any given time, three staff members are in physical therapy due to injuries and disabilities.”

Employing the Iceberg Model

One strategy, Khapra said, is to follow Edward T. Hall’s Iceberg Model of Culture, which focuses on the importance of both visible and invisible elements.

“The key to retention in private practice is to develop a value system where everyone is treated well and respected and compensated fairly,” she said. “That doesn’t mean you split the pie [equally].”

“Visible” elements of the model include the physical environment, policies and practices, and symbols and behaviors, she said. Under the surface, the “invisible” elements are shared values, perceptions and attitudes, leadership style, conflict resolution, decision making, and unwritten rules.

The key, she said, is to give physicians an actual voice in decision making and to avoid favoritism, thus heading off comments such as “Why do the same two people always get the prime scoping blocks?”

Financial transparency is also important, Khapra said, and people want flexibility without it being labeled special treatment. She offered several practical suggestions for addressing the invisible elements of the iceberg.

For instance, she suggested paying for activities outside the practice that physicians do, such as serving on committees. If the practice can’t afford that, she suggested asking the affiliated hospitals to do so, noting that such an initiative can often build community support.

Paying more attention to early associates than is typical can also benefit the practice, Khapra said. “So much effort is made to recruit them, and then once there, we’re on to the next [recruits].” Instead, she suggested, “pay attention to their needs.”

Providing support to physicians who are injured is also crucial and can foster a community culture, she said. For example, one Gastro Health physician was out for 4 weeks due to complications from surgery. “Everyone jumped in” to help fill the injured physician’s shifts, she said, reassuring the physician that the money would be figured out later. “That’s the culture you want to instill.”

To prevent burnout, another key to retaining physicians, “you have to provide support staff.” And offering good benefits, including parental and maternal leave and disability benefits, is also crucial, Khapra said. Consider practices such as having social dinners, another way to build a sense of community.

Finally, bring in national and local gastroenterology organizations for discussions, including advocating for fair reimbursement for private practice. Consider working with the Digestive Health Physicians Association, which describes itself as the voice of independent gastroenterology, she suggested.

 

More Perspectives

Jami Kinnucan, MD, AGAF, a gastroenterologist and associate professor of medicine at Mayo Clinic, Jacksonville, Florida, spoke about optimizing recruitment of young gastroenterologists and provided perspective on Khapra’s talk.

Dr. Jami Kinnucan

“I think there’s a lot of overlap” with her topic and retaining private practice gastroenterologists, she said in an interview with GI & Hepatology News. Most important, she said, is having an efficient system in which the administrative flow is left to digital tools or other staff, not physicians. “That will also help to reduce burnout,” she said, and allow physicians to do what they most want to do, which is to focus on providing care to patients.

“People want to feel valued for their work,” she agreed. “People want opportunity for career development, opportunities for growth.”

As gastroenterologists age, flexibility is important, as it is in general for all physicians, Kinnucan said. She suggested schedule flexibility as one way. For instance, “if I tell 10 providers, ‘I need you to see 100 patients this week, but you can do it however you want,’ that promotes flexibility. They might want to see all of them on Monday and Tuesday, for instance. If you give people choice and autonomy, they are more likely to feel like they are part of the decision.”

How do you build a high-functioning team? “You do it by letting them operate autonomously,” and “you let people do the things they are really excited about.” And always, as Khapra said, focus on the invisible elements that are so crucial.

Khapra and Kinnucan had no relevant disclosures. Khapra received no funding for her presentation.

A version of this article appeared on Medscape.com.



Blood Detection Capsule Helpful in Suspected Upper GI Bleeding


SAN DIEGO — A real-time, blood-sensing capsule (PillSense) is a safe and effective diagnostic tool for patients with suspected upper gastrointestinal (GI) bleeding that can aid patient triage, reduce unnecessary procedures, and optimize resource use, a study found.

Notably, patients with negative capsule results had shorter hospital stays and lower acuity markers, and in more than one third of cases, an esophagogastroduodenoscopy (EGD) was avoided altogether without any observed adverse events or readmissions, the study team found.

“Our study shows that this novel capsule that detects blood in the upper GI tract (PillSense) was highly sensitive and specific (> 90%) for detecting recent or active upper GI blood, influenced clinical management in 80% of cases and allowed about one third of patients to be safely discharged from the emergency department, with close outpatient follow-up,” Linda Lee, MD, AGAF, medical director of endoscopy, Brigham and Women’s Hospital and associate professor of medicine, Harvard Medical School, Boston, told GI & Hepatology News.
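Sensitivity, specificity, and predictive values come from a 2 × 2 table of capsule results against the reference standard (blood seen at endoscopy). A toy calculation with hypothetical counts, chosen only to show the arithmetic behind figures like “> 90%” — these are not the study’s actual cell counts:

```python
# Hypothetical 2x2 table: capsule result vs blood confirmed at endoscopy.
tp, fp = 14, 1   # capsule positive: true positives / false positives
fn, tn = 1, 27   # capsule negative: missed bleeds / correct negatives

sensitivity = tp / (tp + fn)   # fraction of true bleeds detected
specificity = tn / (tn + fp)   # fraction of non-bleeds correctly cleared
ppv = tp / (tp + fp)           # how trustworthy a positive result is
npv = tn / (tn + fn)           # how trustworthy a negative result is

print(f"sens {sensitivity:.0%}  spec {specificity:.0%}  "
      f"PPV {ppv:.0%}  NPV {npv:.0%}")
```

A high NPV is what makes a negative capsule result useful for triage: it supports safely deferring or avoiding endoscopy.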

The study was presented at Digestive Disease Week® (DDW) 2025.

 

Real-World Insights

EGD is the gold standard for diagnosing suspected upper GI bleeding, but limited access to timely EGD complicates diagnosis and resource allocation.

Approved by the US Food and Drug Administration, PillSense (EnteraSense) is an ingestible capsule with a reusable receiver that provides a rapid, noninvasive method for detecting upper GI bleeding. The capsule analyzes light absorption to identify blood and transmits the result within 10 minutes.

Lee and colleagues evaluated the real-world impact of this point-of-care device on clinical triage and resource allocation, while assessing its safety profile.

They analyzed data on 43 patients (mean age 60 years; 72% men) with clinical suspicion of upper GI bleeding in whom the device was used. The most common symptoms were symptomatic anemia (70%), melena (67%), and hematemesis (33%).

Sixteen PillSense studies (37%) were positive for blood detection, and 27 (63%) were negative.

Compared with patients with positive capsule results, those without blood detected by the capsule had shorter hospital stays (mean, 3.8 vs 13.4 days; P = .02), lower Glasgow-Blatchford scores (mean, 7.93 vs 12.81; P = .005), and fewer units of blood transfused (mean, 1.19 vs 10.94; P = .01), and they were less likely to be hemodynamically unstable (5 vs 8 patients; P = .03).

Capsule results influenced clinical management in 80% of cases, leading to avoidance of EGD in 37% and prioritization of urgent EGD in 18% (all had active bleeding on EGD).

Capsule use improved resource allocation in 51% of cases. This included 12 patients who were discharged from the emergency department (ED), six who were assigned an inpatient bed early, and four who underwent expedited colonoscopy after upper GI bleeding was ruled out, they noted.

Among the eight patients who did not undergo EGD, there were no readmissions within 30 days and no adverse events. There were no capsule-related adverse events.

“Clinicians should consider using this novel capsule PillSense as another data point in the management of suspected upper GI bleed,” Lee told GI & Hepatology News.

“This could include in helping to triage patients for safe discharge from the ED or to more urgent endoscopy, to differentiate between upper vs lower GI bleed and to manage ICU patients with possible rebleeding,” Lee said.

 

Important Real-World Evidence

Reached for comment, Shahin Ayazi, MD, an esophageal surgeon and director of the Allegheny Health Network Chevalier Jackson Esophageal Research Center in Pittsburgh, Pennsylvania, said the study is important for several reasons.

“Prior investigations have established that PillSense possesses a high negative predictive value for detecting upper GI bleeding and have speculated on its utility in triage, decision-making, and potentially avoiding unnecessary endoscopy. This study is important because it substantiates that speculation with clinical data,” Ayazi, who wasn’t involved in the study, told GI & Hepatology News.

“These findings support the capsule’s practical application in patient stratification and clinical workflow, particularly when diagnostic uncertainty is high and endoscopic resources are limited,” Ayazi noted.

In his experience, PillSense is “highly useful as a triage adjunct in the evaluation of suspected upper GI bleeding. It provides direct and objective evidence as to whether blood is currently present in the stomach,” he said.

“In patients whose presentation is ambiguous or whose clinical scores fall into an intermediate risk zone, this binary result can provide clarity that subjective assessment alone may not achieve. This is particularly relevant in settings where the goal is to perform endoscopy within 24 hours, but the volume of consults exceeds procedural capacity,” Ayazi explained.

“In such scenarios, PillSense enables physicians to stratify patients based on objective evidence of active bleeding, helping to prioritize those who require urgent endoscopy and defer or even avoid endoscopic evaluation in those who do not. The result is a more efficient allocation of endoscopic resources without compromising patient safety,” he added.

Ayazi cautioned that the PillSense capsule should not be used as a replacement for clinical evaluation or established risk stratification protocols.

“It is intended for hemodynamically stable patients and has not been validated in cases of active or massive bleeding. Its diagnostic yield depends on the presence of blood in the stomach at the time of capsule transit; intermittent or proximal bleeding that has ceased may not be detected, introducing the potential for false-negative results,” Ayazi told GI & Hepatology News.

“However, in prior studies, the negative predictive value was high, and in the present study, no adverse outcomes were observed in patients who did not undergo endoscopy following a negative PillSense result,” Ayazi noted.

“It must also be understood that PillSense does not localize the source of bleeding or replace endoscopy in patients with a high likelihood of active hemorrhage. It is not designed to detect bleeding from the lower GI tract or distal small bowel. Rather, it serves as an adjunct that can provide immediate clarity when the need for endoscopy is uncertain, and should be interpreted within the broader context of clinical findings, laboratory data, and established risk stratification tools,” he added.

The study had no specific funding. Lee and Ayazi had no relevant disclosures.

A version of this article appeared on Medscape.com.



FROM DDW 2025


Colorectal Cancer Screening Choices: Is Compliance Key?

Article Type
Changed

SAN DIEGO — Among the ever-expanding options for colorectal cancer (CRC) screening, precision-medicine blood tests are becoming more advanced and convenient than ever. But caveats abound, and when it comes to potentially life-saving screening measures, picking the optimal tool is critical.

Regarding tests, “perfect is not possible,” said William M. Grady, MD, AGAF, of the Fred Hutchinson Cancer Center, University of Washington School of Medicine, in Seattle, who took part in a debate on the pros and cons of key screening options at Digestive Disease Week® (DDW) 2025.

Dr. William M. Grady



“We have to remember that that’s the reality of colorectal cancer screening, and we need to meet our patients where they live,” said Grady, who argued on behalf of blood-based tests, including cell-free (cf) DNA (Shield, Guardant Health) and cfDNA plus protein biomarkers (Freenome).

A big point in their favor is their convenience and higher patient compliance: even a better test does not work if it doesn’t get done, he stressed.

He cited data showing suboptimal compliance rates with standard colonoscopy: about 70% among non-Hispanic White individuals, 67% among Black individuals, 51% among Hispanic individuals, and just 26% among patients aged 45-50 years.

With troubling increases in CRC incidence among younger patients, “that’s a group we’re particularly concerned about,” Grady said.

Meanwhile, studies show compliance rates with blood-based tests of 80% or higher, with similar rates across the racial and ethnic groups that have lower compliance with conventional colonoscopy, he noted.

Importantly, in terms of performance in detecting CRC, blood-based tests stand up to other modalities, as demonstrated in a real-world study conducted by Grady and his colleagues showing a sensitivity of 83% for the cfDNA test, 74% for the fecal immunochemical test (FIT) stool test, and 92% for a multitarget stool DNA test compared with 95% for colonoscopy.

“What we can see is that the sensitivity of blood-based tests looks favorable and comparable to other tests,” he said.

Among the four options, cfDNA had the highest patient adherence rate (85%-86%) compared with colonoscopy (28%-42%), FIT (43%-65%), and multitarget stool DNA (48%-60%).

“The bottom line is that these tests decrease CRC mortality and incidence, and we know there’s a potential to improve compliance with colorectal cancer screening if we offer blood-based tests for average-risk people who refuse colonoscopy,” Grady said.

 

Blood-Based Tests: Caveats, Harms?

Arguing against blood-based tests in the debate, Robert E. Schoen, MD, MPH, professor of medicine and epidemiology, Division of Gastroenterology, Hepatology and Nutrition, at the University of Pittsburgh, in Pittsburgh, Pennsylvania, checked off some of the key caveats.

“While the overall sensitivity of blood-based tests may look favorable, these tests don’t detect early CRC well,” said Schoen. The sensitivity rates for stage 1 CRC are 64.7% with Guardant Health and 57.1% with Freenome.

Furthermore, their rates of detecting advanced adenomas are very low: only about 13% with Guardant Health and an even lower 12.5% with Freenome, he reported.

These rates are “similar to the false positive rate, with poor discrimination and accuracy for advanced adenomas,” Schoen said. “Without substantial detection of advanced adenomas, blood-based testing is inferior [to other options].”

Importantly, the low advanced adenoma rate translates to a lack of CRC prevention, which is key to reducing CRC mortality, he noted.

Success with blood-based tests, as with stool tests, depends on a follow-up colonoscopy when results are positive, but Schoen pointed out that this may or may not happen.

He cited FIT data showing that among 33,000 patients with abnormal stool tests, the rate of follow-up colonoscopy within a year, despite the concerning results, was a dismal 56%.

“We have a long way to go to make sure that people who get positive noninvasive tests get followed up,” he said.

In terms of the argument that blood-based screening is better than no screening at all, Schoen cited recent research that projected reductions in the risk for CRC incidence and mortality among 100,000 patients with each of the screening modalities.

Starting with standard colonoscopy performed every 10 years, the reductions in incidence and mortality would be 79% and 81%, respectively, followed by annual FIT, at 72% and 76%; multitarget DNA every 3 years, at 68% and 73%; and cfDNA (Shield), at 45% and 55%.

Based on those rates, if patients originally opting for FIT were to shift to blood-based tests, “the rate of CRC deaths would increase,” Schoen noted.

The findings underscore that “blood testing is unfavorable as a ‘substitution test,’” he added. “In fact, widespread adoption of blood testing could increase CRC morbidity.”

“Is it better than nothing?” he asked. “Yes, but only if performance of a colonoscopy after a positive test is accomplished.”

 

What About FIT?

Arguing that stool-based testing with FIT is the ideal first-line CRC test, Jill Tinmouth, MD, PhD, a professor at the University of Toronto, Ontario, Canada, pointed to its prominent role in organized screening programs, including in regions where limited resources preclude widespread first-line colonoscopy screening. In addition, FIT reserves colonoscopy for patients already identified as at risk.

Dr. Jill Tinmouth

Data from one such program, reported by Kaiser Permanente of Northern California, showed that participation in CRC screening doubled from 40% to 80% over 10 years after initiating FIT screening. CRC mortality over the same period decreased by 50% from baseline, and incidence fell by as much as 75%.

Regarding follow-up colonoscopies, Tinmouth noted that studies reflecting real-world participation and adherence to FIT in the United Kingdom, the Netherlands, Taiwan, and California showed follow-up colonoscopy rates of 88%, 85%, 70%, and 78%, respectively.

Meanwhile, a recent large comparison of biennial FIT (n = 26,719) vs one-time colonoscopy (n = 26,332) screening, the first study to directly compare the two, showed noninferiority, with nearly identical rates of CRC mortality at 10 years (0.22% colonoscopy vs 0.24% FIT) as well as CRC incidence (1.13% vs 1.22%, respectively).

“This study shows that in the context of organized screening, the benefits of FIT are the same as colonoscopy in the most important outcome of CRC — mortality,” Tinmouth said.

Furthermore, as noted with blood-based screening, the higher participation with FIT shows a much more even racial/ethnic participation than that observed with colonoscopy.

“FIT has clear and compelling advantages over colonoscopy,” she said. As well as better compliance among all groups, “it is less costly and also better for the environment [by using fewer resources],” she added.

 

Colonoscopy: ‘Best for First-Line Screening’

Making the case that standard colonoscopy should in fact be the first-line test, Swati G. Patel, MD, director of the Gastrointestinal Cancer Risk and Prevention Center at the University of Colorado Anschutz Medical Center, Aurora, Colorado, emphasized the robust, large population studies showing its benefits. Among them is a landmark national policy study showing a significant reduction in CRC incidence and mortality associated with first-line colonoscopy and adenoma removal.

Dr. Swati G. Patel

A multitude of other studies in different settings have also shown similar benefits across large populations, Patel added.

In terms of its key advantages over FIT, the once-a-decade screening requirement for average-risk patients is seen as highly favorable by many, as evidenced in clinical trial data showing that individuals highly value tests that are accurate and do not need to be completed frequently, she said. Research from various other trials of organized screening programs further showed patients crossing over from FIT to colonoscopy, including one study of more than 3500 patients comparing colonoscopy and FIT, which had approximately 40% adherence with FIT vs nearly 90% with colonoscopy.

Notably, as many as 25% of the patients in the FIT arm in that study crossed over to colonoscopy, presumably due to preference for the once-a-decade regimen, Patel said.

“Colonoscopy had a substantial and impressive long-term protective benefit both in terms of developing colon cancer and dying from colon cancer,” she said.

Regarding the head-to-head FIT and colonoscopy comparison that Tinmouth described, Patel noted that a supplemental table in the study’s appendix, restricted to patients who completed screening, reveals increasing separation between the two approaches favoring colonoscopy in longer-term CRC incidence and mortality.

The collective findings underscore that “colonoscopy as a standalone test is uniquely cost-effective” given the costs of treating colon cancer, she said.

Instead of relying on biennial FIT testing, colonoscopy allows clinicians to immediately risk-stratify individuals who can benefit from closer surveillance and to relax surveillance for those determined to be low risk, she said.

Grady has served on the scientific advisory boards for Guardant Health and Freenome and has consulted for Karius. Schoen reported relationships with Guardant Health and grant/research support from Exact Sciences, Freenome, and Immunovia. Tinmouth had no disclosures to report. Patel disclosed relationships with Olympus America and Exact Sciences.

A version of this article appeared on Medscape.com.



In follow-up colonoscopies, Tinmouth noted that collective research from studies reflecting real-world participation and adherence to FIT in populations in the United Kingdom, the Netherlands, Taiwan, and California show follow-up colonoscopy rates of 88%, 85%, 70%, and 78%, respectively.

Meanwhile, a recent large comparison of biennial FIT (n = 26,719) vs one-time colonoscopy (n = 26,332) screening, the first study to directly compare the two, showed noninferiority, with nearly identical rates of CRC mortality at 10 years (0.22% colonoscopy vs 0.24% FIT) as well as CRC incidence (1.13% vs 1.22%, respectively).

“This study shows that in the context of organized screening, the benefits of FIT are the same as colonoscopy in the most important outcome of CRC — mortality,” Tinmouth said.

Furthermore, as noted with blood-based screening, the higher participation with FIT shows a much more even racial/ethnic participation than that observed with colonoscopy.

“FIT has clear and compelling advantages over colonoscopy,” she said. As well as better compliance among all groups, “it is less costly and also better for the environment [by using fewer resources],” she added.

 

Colonoscopy: ‘Best for First-Line Screening’

Making the case that standard colonoscopy should in fact be the first-line test, Swati G. Patel, MD, director of the Gastrointestinal Cancer Risk and Prevention Center at the University of Colorado Anschutz Medical Center, Aurora, Colorado, emphasized the robust, large population studies showing its benefits. Among them is a landmark national policy study showing a significant reduction in CRC incidence and mortality associated with first-line colonoscopy and adenoma removal.

Dr. Swati G. Patel

A multitude of other studies in different settings have also shown similar benefits across large populations, Patel added.

In terms of its key advantages over FIT, the once-a-decade screening requirement for average-risk patients is seen as highly favorable by many, as evidenced in clinical trial data showing that individuals highly value tests that are accurate and do not need to be completed frequently, she said. Research from various other trials of organized screening programs further showed patients crossing over from FIT to colonoscopy, including one study of more than 3500 patients comparing colonoscopy and FIT, which had approximately 40% adherence with FIT vs nearly 90% with colonoscopy.

Notably, as many as 25% of the patients in the FIT arm in that study crossed over to colonoscopy, presumably due to preference for the once-a-decade regimen, Patel said.

“Colonoscopy had a substantial and impressive long-term protective benefit both in terms of developing colon cancer and dying from colon cancer,” she said.

Regarding the head-to-head FIT and colonoscopy comparison that Tinmouth described, Patel noted that a supplemental table in the study’s appendix of patients who completed screening does reveal increasing separation between the two approaches, favoring colonoscopy, in terms of longer-term CRC incidence and mortality.

The collective findings underscore that “colonoscopy as a standalone test is uniquely cost-effective,” in the face of costs related to colon cancer treatment.

Instead of relying on biennial tests with FIT, colonoscopy allows clinicians to immediately risk-stratify those individuals who can benefit from closer surveillance and really relax surveillance for those who are determined to be low risk, she said.

Grady had been on the scientific advisory boards for Guardant Health and Freenome and had consulted for Karius. Shoen reported relationships with Guardant Health and grant/research support from Exact Sciences, Freenome, and Immunovia. Tinmouth had no disclosures to report. Patel disclosed relationships with Olympus America and Exact Sciences.

A version of this article appeared on Medscape.com.

SAN DIEGO — In the ever-expanding options for colorectal cancer (CRC) screening, blood tests using precision medicine are becoming more advanced and convenient than ever; however, caveats abound, and when it comes to potentially life-saving screening measures, picking the optimal screening tool is critical.

Regarding tests, “perfect is not possible,” said William M. Grady, MD, AGAF, of the Fred Hutchinson Cancer Center, University of Washington School of Medicine, in Seattle, who took part in a debate on the pros and cons of key screening options at Digestive Disease Week® (DDW) 2025.

“We have to remember that that’s the reality of colorectal cancer screening, and we need to meet our patients where they live,” said Grady, who argued on behalf of blood-based tests, including cell-free (cf) DNA (Shield, Guardant Health) and cfDNA plus protein biomarkers (Freenome).

A big point in their favor is their convenience and higher patient compliance — better tests that don’t get done do not work, he stressed.

He cited data showing suboptimal compliance rates with standard colonoscopy: Rates range from about 70% among non-Hispanic White individuals to 67% among Black individuals and 51% among Hispanic individuals, and fall to just 26% among patients aged 45-50 years.

With troubling increases in CRC incidence among younger patients, “that’s a group we’re particularly concerned about,” Grady said.

Meanwhile, studies show compliance rates with blood-based tests are ≥ 80%, with similar rates across racial and ethnic groups, in contrast to the lower rates seen with conventional colonoscopy, he noted.

Importantly, in terms of performance in detecting CRC, blood-based tests stand up to other modalities, as demonstrated in a real-world study conducted by Grady and his colleagues showing a sensitivity of 83% for the cfDNA test, 74% for the fecal immunochemical test (FIT) stool test, and 92% for a multitarget stool DNA test compared with 95% for colonoscopy.

“What we can see is that the sensitivity of blood-based tests looks favorable and comparable to other tests,” he said.

Among the four options, cfDNA had the highest patient adherence rate (85%-86%) compared with colonoscopy (28%-42%), FIT (43%-65%), and multitarget stool DNA (48%-60%).

“The bottom line is that these tests decrease CRC mortality and incidence, and we know there’s a potential to improve compliance with colorectal cancer screening if we offer blood-based tests for average-risk people who refuse colonoscopy,” Grady said.

 

Blood-Based Tests: Caveats, Harms?

Arguing against blood-based tests in the debate, Robert E. Schoen, MD, MPH, professor of medicine and epidemiology, Division of Gastroenterology, Hepatology and Nutrition, at the University of Pittsburgh, in Pittsburgh, Pennsylvania, checked off some of the key caveats.

“While the overall sensitivity of blood-based tests may look favorable, these tests don’t detect early CRC well,” said Schoen. The sensitivity rates for stage 1 CRC are 64.7% with Guardant Health and 57.1% with Freenome.

Furthermore, their rates of detecting advanced adenomas are very low; the rate with Guardant Health is only about 13%, and with Freenome is even lower at 12.5%, he reported.

These rates are “similar to the false positive rate, with poor discrimination and accuracy for advanced adenomas,” Schoen said. “Without substantial detection of advanced adenomas, blood-based testing is inferior [to other options].”

Importantly, the low advanced adenoma rate translates to a lack of CRC prevention, which is key to reducing CRC mortality, he noted.

Essential to success with blood-based biopsies, as well as with stool tests, is the need for a follow-up colonoscopy if results are positive, but Schoen pointed out that this may or may not happen.

He cited research from FIT data showing that among 33,000 patients with abnormal stool tests, the rate of follow-up colonoscopy within a year, despite the concerning results, was a dismal 56%.

“We have a long way to go to make sure that people who get positive noninvasive tests get followed up,” he said.

In terms of the argument that blood-based screening is better than no screening at all, Schoen cited recent research that projected reductions in the risk for CRC incidence and mortality among 100,000 patients with each of the screening modalities.

Starting with standard colonoscopy performed every 10 years, the reductions in incidence and mortality would be 79% and 81%, respectively, followed by annual FIT, at 72% and 76%; multitarget DNA every 3 years, at 68% and 73%; and cfDNA (Shield), at 45% and 55%.

Based on those rates, if patients originally opting for FIT were to shift to blood-based tests, “the rate of CRC deaths would increase,” Schoen noted.

The findings underscore that “blood testing is unfavorable as a ‘substitution test,’” he added. “In fact, widespread adoption of blood testing could increase CRC morbidity.”

“Is it better than nothing?” he asked. “Yes, but only if performance of a colonoscopy after a positive test is accomplished.”

 

What About FIT?

Arguing that stool-based testing, or FIT, is the ideal choice as a first-line CRC test, Jill Tinmouth, MD, PhD, a professor at the University of Toronto, Ontario, Canada, pointed to its prominent role in organized screening programs, including in regions where resources may limit the widespread use of routine first-line colonoscopy screening. In addition, it reserves colonoscopy for those already flagged as at risk by a positive stool test.


Data from one such program, reported by Kaiser Permanente of Northern California, showed that participation in CRC screening doubled from 40% to 80% over 10 years after initiating FIT screening. CRC mortality over the same period decreased by 50% from baseline, and incidence fell by as much as 75%.

Regarding follow-up colonoscopies, Tinmouth noted that collective research from studies reflecting real-world participation and adherence to FIT in populations in the United Kingdom, the Netherlands, Taiwan, and California shows follow-up colonoscopy rates of 88%, 85%, 70%, and 78%, respectively.

Meanwhile, a recent large comparison of biennial FIT (n = 26,719) vs one-time colonoscopy (n = 26,332) screening, the first study to directly compare the two, showed noninferiority, with nearly identical rates of CRC mortality at 10 years (0.22% colonoscopy vs 0.24% FIT) as well as CRC incidence (1.13% vs 1.22%, respectively).

“This study shows that in the context of organized screening, the benefits of FIT are the same as colonoscopy in the most important outcome of CRC — mortality,” Tinmouth said.

Furthermore, as noted with blood-based screening, the higher participation with FIT is spread much more evenly across racial and ethnic groups than that observed with colonoscopy.

“FIT has clear and compelling advantages over colonoscopy,” she said. As well as better compliance among all groups, “it is less costly and also better for the environment [by using fewer resources],” she added.

 

Colonoscopy: ‘Best for First-Line Screening’

Making the case that standard colonoscopy should in fact be the first-line test, Swati G. Patel, MD, director of the Gastrointestinal Cancer Risk and Prevention Center at the University of Colorado Anschutz Medical Center, Aurora, Colorado, emphasized the robust, large population studies showing its benefits. Among them is a landmark national policy study showing a significant reduction in CRC incidence and mortality associated with first-line colonoscopy and adenoma removal.


A multitude of other studies in different settings have also shown similar benefits across large populations, Patel added.

In terms of its key advantages over FIT, the once-a-decade screening requirement for average-risk patients is seen as highly favorable by many, as evidenced in clinical trial data showing that individuals highly value tests that are accurate and do not need to be completed frequently, she said. Research from other trials of organized screening programs further showed patients crossing over from FIT to colonoscopy. One study of more than 3500 patients comparing the two approaches, for example, found approximately 40% adherence with FIT vs nearly 90% with colonoscopy.

Notably, as many as 25% of the patients in the FIT arm in that study crossed over to colonoscopy, presumably due to preference for the once-a-decade regimen, Patel said.

“Colonoscopy had a substantial and impressive long-term protective benefit both in terms of developing colon cancer and dying from colon cancer,” she said.

Regarding the head-to-head FIT and colonoscopy comparison that Tinmouth described, Patel noted that a supplemental table in the study’s appendix of patients who completed screening does reveal increasing separation between the two approaches, favoring colonoscopy, in terms of longer-term CRC incidence and mortality.

The collective findings underscore that “colonoscopy as a standalone test is uniquely cost-effective,” given the costs of colon cancer treatment, Patel said.

Instead of relying on biennial tests with FIT, colonoscopy allows clinicians to immediately risk-stratify individuals who can benefit from closer surveillance and to relax surveillance for those determined to be low risk, she said.

Grady had been on the scientific advisory boards for Guardant Health and Freenome and had consulted for Karius. Schoen reported relationships with Guardant Health and grant/research support from Exact Sciences, Freenome, and Immunovia. Tinmouth had no disclosures to report. Patel disclosed relationships with Olympus America and Exact Sciences.

A version of this article appeared on Medscape.com.


FROM DDW 2025
