Auto-Brewery Syndrome Explained: New Patient Cohort Identifies Culprit Bacteria, Fermentation
WASHINGTON — When a published case of auto-brewery syndrome (ABS) in China — caused by Klebsiella pneumoniae — received widespread publicity in 2019, patients reacted, sending emails to lead author Jing Yuan, in Beijing, China. Many of these inquiries were from patients in the United States who believed they might have ABS.
“Can you check to see if I have ABS?” patients asked Yuan.
For help, Yuan contacted Bernd Schnabl, MD, AGAF, at the University of California, San Diego, whose research addresses alcohol-associated liver disease, the gut-liver axis, and the role of gut microbiome–derived ethanol in metabolic dysfunction–associated steatotic liver disease (MASLD).
“She asked me, ‘Are you interested in looking into ABS?’” Schnabl recalled at the Gut Microbiota for Health (GMFH) World Summit 2025. He dug in and formed what may be the largest research cohort thus far of patients with ABS — a group of 22 patients whose diagnosis was confirmed through an observed glucose challenge.
His findings from the cohort, described below, are soon to be published.
ABS is considered a rare condition, but “I’d argue that it’s rarely diagnosed because many physicians don’t know of the diagnosis, and many are actually very skeptical about the disease,” Schnabl said at the meeting, convened by AGA and the European Society of Neurogastroenterology and Motility.
Patients experience symptoms of intoxication when ethanol produced by dysregulated gut microbiota exceeds the capacity of the liver to metabolize it and accumulates in the blood, he explained.
“Patients constantly talk about brain fog; they can’t concentrate, and it can be very severe,” he said. “They don’t get a firm diagnosis and go from one medical center to another, and they also suffer from complications of alcohol use disorder including serious family, social, and legal problems.”
Advancing Knowledge, Findings From the Cohort
The phenomenon of ethanol production by gut microbiota has been known for over a century, Schnabl wrote with two co-authors in a 2024 review in Nature Reviews Gastroenterology & Hepatology of “endogenous ethanol production in health and disease.”
And in recent decades, he said at the meeting, research has linked endogenous ethanol production to MASLD, positioning it as a potential contributor to disease pathogenesis. In one of the most recently published studies, patients with MASLD had higher concentrations of ethanol in their systemic circulation after a mixed meal test than did healthy controls — and even higher ethanol concentrations in their portal vein blood, “suggesting that this ethanol is coming from the gut microbiome,” Schnabl said.
The paper from China that led Schnabl to establish his cohort was spurred on by a patient with both ABS and MASLD cirrhosis. The patient was found to have strains of high alcohol–producing K pneumoniae in the gut microbiome. When the researchers transplanted these strains into mice via fecal microbiota transplantation (FMT), the mice developed MASLD.
Schnabl’s study focused solely on ABS, which is sometimes called gut fermentation syndrome. The 22 patients in his ABS cohort — each of whom provided stool samples corresponding to remission or flare of ABS symptoms — had a median age of 45 years and were predominantly men, slightly overweight but not obese, and without liver disease. (In all, 48 patients with suspected ABS were screened; 20 were excluded after an observed glucose challenge failed to establish a diagnosis, and 6 withdrew from the study.)
During remission (no symptoms), patients’ mean blood alcohol content (BAC) level was zero, but during a flare, the mean BAC level was 136 mg/dL. “To put it into perspective, the legal limit for driving in the US is 80 mg/dL,” Schnabl said. Within a mean of 4 hours after the oral glucose load used for diagnosis, patients’ mean BAC level was 73 mg/dL, he noted.
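To make the units concrete: a BAC reported in mg/dL divides by 1,000 to give the more familiar percent (g/dL) form used in driving laws. A minimal sketch, using only the figures quoted above:

```python
# Convert blood alcohol content (BAC) from mg/dL to the percent (g/dL) form.
# 100 mg/dL = 0.1 g/dL = 0.10% BAC.

def mg_dl_to_percent(bac_mg_dl: float) -> float:
    """BAC in mg/dL -> BAC as a percentage (grams per 100 mL of blood)."""
    return bac_mg_dl / 1000.0

flare_bac = mg_dl_to_percent(136)    # mean BAC during an ABS flare
legal_limit = mg_dl_to_percent(80)   # US legal driving limit
post_glucose = mg_dl_to_percent(73)  # mean BAC ~4 h after the diagnostic glucose load

print(f"Flare: {flare_bac:.3f}%  Legal limit: {legal_limit:.3f}%  Post-glucose: {post_glucose:.3f}%")
# A flare mean of 136 mg/dL corresponds to 0.136% BAC, well above the 0.08% limit.
```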
To assess ethanol production by the patients’ microbiota, Schnabl and his team cultured the stool samples — anaerobically adding glucose and measuring ethanol production — and compared the results with findings from stool samples collected from household partners who generally were of the opposite sex. Among their findings: cultures of stool from patients experiencing a flare produced significantly more ethanol than stool from household partners and samples from patients in remission.
To assess whether ethanol was produced by bacteria or fungi, the researchers measured ethanol production in cultures treated with either the antifungal amphotericin B or the antibiotic chloramphenicol. “Chloramphenicol clearly decreased the ethanol production,” Schnabl said. “So at least in this culture test, bacteria produced most of the alcohol in our patients.”
Taxonomic profiling, moreover, revealed “significantly elevated levels” of proteobacteria — with relative abundance of Escherichia coli and K pneumoniae — in patients who were flaring, he said. And functional profiling of the fecal microbiota showed much higher activity of fermentation pathways during patients’ flares than in household partners or healthy controls. (Healthy controls were incorporated into the taxonomic and functional profiling parts of the research.)
A Clinical Approach to ABS
Schnabl said at the meeting that stool cultures of both household partners and patients in long-term remission “all produced some low amount of ethanol, which was initially puzzling to us” but became less surprising as he and his colleagues reviewed more of the literature.
Asked during a discussion period whether ABS could explain chronic fatigue, a commonly reported symptom in the general population, Schnabl said it’s possible. And in an interview after the meeting, he elaborated. “The literature clearly says ABS is a rare disease, but I argue that more patients may have ABS; they just don’t know it. And I suspect some may have mild symptoms, like brain fog, feeling tired,” he said. “But at this point, this is complete speculation.”
Physicians should “be aware that if a patient has unexplained symptoms that could be aligned with ABS, checking the blood alcohol level” may be warranted, he said in the interview. A PEth (phosphatidylethanol) test — a biomarker test for longer-term alcohol consumption — is an option, but it cannot discriminate between exogenous alcohol consumption and endogenous ethanol production.
There are no standardized diagnostic tests for ABS, but at the meeting, Schnabl outlined a clinical approach, starting with a standardized oral glucose tolerance challenge test to detect elevated ethanol concentrations.
A fecal yeast test is warranted for diagnosed patients on the basis of case reports in which ABS symptoms improved with antifungal treatment. When the fecal yeast test is negative, “ideally you want to identify the ethanol-producing intestinal bacteria in the patient,” he said, using cultures and fecal metagenomic sequencing.
Treatment could then be tailored to the identified microbial strain, with options being selective antibiotics, probiotics and/or prebiotics, and — likely in the future — phages or FMT, he said. (These options, all aimed at restoring gut homeostasis, are also discussed in his 2024 review.)
Schnabl and his team recently performed FMT in a patient with ABS in whom E coli was determined to be producing excessive ethanol. The FMT, performed after antibiotic pretreatment, resulted in decreases in the relative abundance of proteobacteria and E coli levels, lower blood alcohol levels and fermentation enrichment pathways, and normalized liver enzymes.
After 6 months, however, the patient relapsed, and the measurements reversed. “We decided to do FMT every month, and we treated the patient for 6 months,” Schnabl said, noting that ABS had rendered the patient dysfunctional and unable to work. “He has been out of treatment for over a year now and is not flaring any longer.”
Schnabl and Elizabeth Hohmann, MD, at Massachusetts General Hospital, Boston, are currently recruiting patients with confirmed ABS for a National Institutes of Health–funded phase 1 safety and tolerability study of FMT for ABS.
Schnabl disclosed serving as an external scientific advisor/consultant to Ambys Medicines, Boehringer Ingelheim, Gelesis, Mabwell Therapeutics, Surrozen, and Takeda; and as the founder/BOD/BEO of Nterica Bio.
A version of this article appeared on Medscape.com.
FROM GMFH 2025
Computer-Aided Colonoscopy Not Ready for Prime Time: AGA Clinical Practice Guideline
The AGA’s new clinical practice guideline makes no recommendation for or against computer-aided detection (CADe)–assisted colonoscopy in screening for colorectal cancer (CRC), the third most common cause of cancer mortality in the United States.

The systematic data review is a collaboration between AGA and The BMJ’s MAGIC Rapid Recommendations. The BMJ issued a separate recommendation against CADe shortly after the AGA guideline was published.
The guideline, led by Shahnaz S. Sultan, MD, MHSc, AGAF, of the Division of Gastroenterology, Hepatology, and Nutrition at the University of Minnesota, Minneapolis, and recently published in Gastroenterology, found only very-low-certainty GRADE-based evidence for several critical long-term outcomes, both desirable and undesirable. These included 11 fewer CRCs per 10,000 individuals and two fewer CRC deaths per 10,000 individuals, an increased burden of more intensive surveillance colonoscopies (635 more per 10,000 individuals), and cost and resource implications.
This technology did, however, yield an 8% (95% CI, 6-10) absolute increase in the adenoma detection rate (ADR) and a 2% (95% CI, 0-4) increase in the detection rate of advanced adenomas and/or sessile serrated lesions. “How this translates into a reduction in CRC incidence or death is where we were uncertain,” Sultan said. “Our best effort at trying to translate the ADR and other endoscopy outcomes to CRC incidence and CRC death relied on the modeling study, which included a lot of assumptions, which also contributed to our overall lower certainty.”
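The trade-off behind that uncertainty can be restated as simple arithmetic on the modeled per-10,000 figures quoted above (very-low-certainty estimates, so the ratio is illustrative only):

```python
# Modeled outcomes per 10,000 individuals screened with CADe-assisted colonoscopy,
# as reported from the guideline's microsimulation. Very-low-certainty evidence.
crc_prevented = 11        # fewer colorectal cancers
crc_deaths_prevented = 2  # fewer CRC deaths
extra_surveillance = 635  # additional intensive surveillance colonoscopies

colonoscopies_per_crc = extra_surveillance / crc_prevented
print(f"~{colonoscopies_per_crc:.0f} extra surveillance colonoscopies per CRC prevented")
# -> roughly 58 extra surveillance colonoscopies for each cancer prevented
```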
The systematic review and meta-analysis included 41 randomized controlled trials totaling 32,108 participants comparing CADe-assisted with standard colonoscopy. The technology was associated with a higher polyp detection rate than standard colonoscopy: 56.1% vs 47.9% (relative risk [RR], 1.22; 95% CI, 1.15-1.28). It also had a higher ADR: 44.8% vs 37.4% (RR, 1.22; 95% CI, 1.16-1.29).
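The absolute gaps behind those rates are straightforward; a sketch using only the percentages quoted above (note the pooled relative risks come from the meta-analysis and are not simply the ratio of these crude rates):

```python
# Detection rates with CADe-assisted vs standard colonoscopy, in percent,
# taken from the meta-analysis figures quoted above.
polyp_cade, polyp_std = 56.1, 47.9  # polyp detection rate
adr_cade, adr_std = 44.8, 37.4      # adenoma detection rate (ADR)

print(f"Polyp detection: +{polyp_cade - polyp_std:.1f} percentage points")
print(f"ADR:             +{adr_cade - adr_std:.1f} percentage points")
# The ~7-8 point ADR gap is consistent with the guideline's reported
# 8% (95% CI, 6-10) absolute increase in ADR.
```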
But although CADe-assisted colonoscopy may increase ADR, it carries a risk for overdiagnosis, as most polyps detected during colonoscopy are diminutive (< 5 mm) and of low malignant potential, the panel noted. Approximately 25% of lesions are missed at colonoscopy. More than 15 million colonoscopies are performed annually in the United States, but studies have demonstrated variable quality of colonoscopies across key quality indicators.
“Artificial intelligence [AI] is revolutionizing medicine and healthcare in the field of GI [gastroenterology], and CADe in colonoscopy has been brought to commercialization,” Sultan told GI & Hepatology News. “Unlike many areas of endoscopic research where we often have a finite number of clinical trial data, CADe-assisted colonoscopy intervention has been studied in over 44 randomized controlled trials and numerous nonrandomized, real-world studies. The question of whether or not to adopt this intervention at a health system or practice level is an important question that was prioritized to be addressed as guidance was needed.”
Commenting on the guideline but not involved in its formulation, Larry S. Kim, MD, MBA, AGAF, a gastroenterologist at South Denver Gastroenterology in Denver, Colorado, said his practice group has used the GI Genius AI system in its affiliated hospitals but has so far chosen not to implement the technology at its endoscopy centers. “At the hospital, our physicians have the ability to utilize the system for select patients or not at all,” he told GI & Hepatology News.
The fact that The BMJ reached a different conclusion based on the same data, evidence-grading system, and microsimulation, Kim added, “highlights the point that when evidence for benefit is uncertain, underlying values are critical.” In declining to make a recommendation, the AGA panel balanced the benefit of improved detection of potentially precancerous adenomas vs increased resource utilization in the face of unclear benefit. “With different priorities, other bodies could reasonably decide to recommend either for or against CADe.”
The Future
According to Sultan, gastroenterologists need a better understanding of patient values and preferences and the value placed on increased adenoma detection, which may also lead to more lifetime colonoscopies without reducing the risk for CRC. “We need better intermediate- and long-term data on the impact of adenoma detection on interval cancers and CRC incidence,” she said. “We need data on detection of polyps that are more clinically significant such as those 6-10 mm in size, as well as serrated sessile lesions. We also need to understand at the population or health system level what the impact is on resources, cost, and access.”
Ultimately, the living guideline underscores the trade-off between desirable and undesirable effects and the limits of the current evidence; as an iterative AI application, CADe will need further validation and better training before a recommendation can be supported.
With the anticipated improvement in software accuracy as AI machine learning reads increasing numbers of images, Sultan added, “the next version of the software may perform better, especially for polyps that are more clinically significant or for flat sessile serrated polyps, which are harder to detect. We plan to revisit the question in the next year or two and potentially revise the guideline.”
These guidelines were fully funded by the AGA Institute with no funding from any outside agency or industry.
Sultan is supported by the US Food and Drug Administration. Co-authors Shazia Mehmood Siddique, Dennis L. Shung, and Benjamin Lebwohl are supported by grants from the National Institute of Diabetes and Digestive and Kidney Diseases. Theodore R. Levin is supported by the Permanente Medical Group Delivery Science and Applied Research Program. Cesare Hassan is a consultant for Fujifilm and Olympus. Peter S. Liang reported doing research work for Freenome and advisory board work for Guardant Health and Natera.
Kim is the AGA president-elect. He disclosed no competing interests relevant to his comments.
A version of this article appeared on Medscape.com.
cancer mortality in the United States.
, the third most common cause ofThe systematic data review is a collaboration between AGA and The BMJ’s MAGIC Rapid Recommendations. The BMJ issued a separate recommendation against CADe shortly after the AGA guideline was published.
Led by Shahnaz S. Sultan, MD, MHSc, AGAF, of the Division of Gastroenterology, Hepatology, and Nutrition at University of Minnesota, Minneapolis, and recently published in Gastroenterology, found only very low certainty of GRADE-based evidence for several critical long-term outcomes, both desirable and undesirable. These included the following: 11 fewer CRCs per 10,000 individuals and two fewer CRC deaths per 10,000 individuals, an increased burden of more intensive surveillance colonoscopies (635 more per 10,000 individuals), and cost and resource implications.
This technology did, however, yield an 8% (95% CI, 6-10) absolute increase in the adenoma detection rate (ADR) and a 2% (95% CI, 0-4) increase in the detection rate of advanced adenomas and/or sessile serrated lesions. “How this translates into a reduction in CRC incidence or death is where we were uncertain,” Sultan said. “Our best effort at trying to translate the ADR and other endoscopy outcomes to CRC incidence and CRC death relied on the modeling study, which included a lot of assumptions, which also contributed to our overall lower certainty.”
The systematic review and meta-analysis included 41 randomized controlled trials with 32,108 participants who underwent CADe-assisted colonoscopy. This technology was associated with a higher polyp detection rate than standard colonoscopy: 56.1% vs 47.9% (relative risk [RR], 1.22; 95% CI, 1.15-1.28). It also had a higher ADR: 44.8% vs 37.4% (RR, 1.22; 95% CI, 1.16-1.29).
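As a quick sanity check on those figures (illustration only: the meta-analysis pools per-trial relative risks, so the pooled RR of 1.22 need not equal the simple ratio of the aggregate rates), the absolute differences can be recomputed directly from the reported rates:

```python
# Crude recomputation from the aggregate detection rates reported above.
# The pooled RRs in the meta-analysis are weighted per-trial estimates,
# so these naive ratios are expected to land near, not on, RR 1.22.
polyp_cade, polyp_std = 0.561, 0.479
adr_cade, adr_std = 0.448, 0.374

print(f"Polyp detection, absolute difference: {polyp_cade - polyp_std:.1%}")
print(f"ADR, absolute difference: {adr_cade - adr_std:.1%}")
print(f"Crude polyp-detection ratio: {polyp_cade / polyp_std:.2f}")
print(f"Crude ADR ratio: {adr_cade / adr_std:.2f}")
```

The 7.4-point crude ADR difference lines up with the review's reported 8% absolute increase once pooling weights are accounted for.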
But although CADe-assisted colonoscopy may increase ADR, it carries a risk for overdiagnosis, as most polyps detected during colonoscopy are diminutive (< 5 mm) and of low malignant potential, the panel noted. Approximately 25% of lesions are missed at colonoscopy. More than 15 million colonoscopies are performed annually in the United States, but studies have demonstrated variable quality of colonoscopies across key quality indicators.
“Artificial intelligence [AI] is revolutionizing medicine and healthcare in the field of GI [gastroenterology], and CADe in colonoscopy has been brought to commercialization,” Sultan told GI & Hepatology News. “Unlike many areas of endoscopic research where we often have a finite number of clinical trial data, CADe-assisted colonoscopy intervention has been studied in over 44 randomized controlled trials and numerous nonrandomized, real-world studies. The question of whether or not to adopt this intervention at a health system or practice level is an important question that was prioritized to be addressed as guidance was needed.”
Commenting on the guideline but not involved in its formulation, Larry S. Kim, MD, MBA, AGAF, a gastroenterologist at South Denver Gastroenterology in Denver, Colorado, said his practice group has used the GI Genius AI system in its affiliated hospitals but has so far chosen not to implement the technology at its endoscopy centers. “At the hospital, our physicians have the ability to utilize the system for select patients or not at all,” he told GI & Hepatology News.
The fact that The BMJ reached a different conclusion based on the same data, evidence-grading system, and microsimulation, Kim added, “highlights the point that when evidence for benefit is uncertain, underlying values are critical.” In declining to make a recommendation, the AGA panel balanced the benefit of improved detection of potentially precancerous adenomas vs increased resource utilization in the face of unclear benefit. “With different priorities, other bodies could reasonably decide to recommend either for or against CADe.”
The Future
According to Sultan, gastroenterologists need a better understanding of patient values and preferences and the value placed on increased adenoma detection, which may also lead to more lifetime colonoscopies without reducing the risk for CRC. “We need better intermediate- and long-term data on the impact of adenoma detection on interval cancers and CRC incidence,” she said. “We need data on detection of polyps that are more clinically significant such as those 6-10 mm in size, as well as serrated sessile lesions. We also need to understand at the population or health system level what the impact is on resources, cost, and access.”
Ultimately, the living guideline underscores the trade-off between desirable and undesirable effects and the limits of the current evidence for making a recommendation; as an iterative AI application, CADe will need further validation and better training to improve.
With the anticipated improvement in software accuracy as AI machine learning reads increasing numbers of images, Sultan added, “the next version of the software may perform better, especially for polyps that are more clinically significant or for flat sessile serrated polyps, which are harder to detect. We plan to revisit the question in the next year or two and potentially revise the guideline.”
These guidelines were fully funded by the AGA Institute with no funding from any outside agency or industry.
Sultan is supported by the US Food and Drug Administration. Co-authors Shazia Mehmood Siddique, Dennis L. Shung, and Benjamin Lebwohl are supported by grants from the National Institute of Diabetes and Digestive and Kidney Diseases. Theodore R. Levin is supported by the Permanente Medical Group Delivery Science and Applied Research Program. Cesare Hassan is a consultant for Fujifilm and Olympus. Peter S. Liang reported doing research work for Freenome and advisory board work for Guardant Health and Natera.
Kim is the AGA president-elect. He disclosed no competing interests relevant to his comments.
A version of this article appeared on Medscape.com.
FROM GASTROENTEROLOGY
Elemental Diet Eases Symptoms in Microbiome Gastro Disorders
An elemental diet eased symptoms in patients with small intestinal bacterial overgrowth (SIBO) and intestinal methanogen overgrowth (IMO), according to a new study.
“Elemental diets have long shown promise for treating gastrointestinal disorders like Crohn’s disease, eosinophilic esophagitis, SIBO (small intestinal bacterial overgrowth), and IMO (intestinal methanogen overgrowth), but poor palatability has limited their use,” lead author Ali Rezaie, MD, medical director of the Gastrointestinal (GI) Motility Program and director of Bioinformatics at Cedars-Sinai Medical Center, Los Angeles, told GI & Hepatology News.
Elemental diets are specialized formulas tailored to meet an individual’s specific nutritional needs and daily requirements for vitamins, minerals, fat, free amino acids, and carbohydrates.
In SIBO and IMO specifically, only about half of patients respond to antibiotics, and many require repeat treatments, underscoring the need for effective nonantibiotic alternatives, said Rezaie. “This is the first prospective trial using a PED [palatable elemental diet], aiming to make this approach both viable and accessible for patients,” he noted.
Assessing a Novel Diet in IMO and SIBO
In the study, which was recently published in Clinical Gastroenterology and Hepatology, Rezaie and colleagues enrolled 30 adults with IMO (40%), SIBO (20%), or both (40%). The mean participant age was 45 years, and 63% were women.
All participants completed 2 weeks of a PED, transitioned to 2-3 days of a bland diet, and then resumed their regular diets for 2 weeks.
The diet consisted of multiple 300-calorie packets, adjusted for individual caloric needs. Participants could consume additional packets for hunger but were prohibited from eating other foods. There was no restriction on water intake.
The primary endpoint was changes in stool microbiome after the PED and reintroduction of regular food. Secondary endpoints included lactose breath test normalization to determine bacterial overgrowth in the gut, symptom response, and adverse events.
Researchers collected 29 stool samples at baseline, 27 post-PED, and 27 at study conclusion (2 weeks post-diet).
Key Outcomes
Although the stool samples’ alpha diversity decreased after the PED, the difference was not statistically significant at the end of the study. However, 30 bacterial families showed significant differences in relative abundance post-PED.
Daily symptom severity improved significantly during the second week of the diet compared with baseline, with reduction in abdominal discomfort, bloating, distention, constipation, and flatulence. Further significant improvements in measures such as abdominal pain, diarrhea, fatigue, urgency, and brain fog were observed after reintroducing regular food.
“We observed 73% breath test normalization and 83% global symptom relief — with 100% adherence and tolerance to 2 weeks of exclusive PED,” Rezaie told GI & Hepatology News. No serious adverse events occurred during the study, he added.
Lactose breath test normalization rates post-PED were 58% in patients with IMO, 100% in patients with SIBO, and 75% in those with both conditions.
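Those subgroup rates are internally consistent with the overall 73% figure. A small check, assuming the enrollment split reported above (30 adults: 40% IMO, 20% SIBO, 40% both):

```python
# Reconstruct the overall breath-test normalization rate from the subgroups.
# Counts follow from the reported 40%/20%/40% split of 30 participants.
n_imo, n_sibo, n_both = 12, 6, 12         # 40%, 20%, 40% of 30
norm_imo, norm_sibo, norm_both = 7, 6, 9  # ~58% of 12, 100% of 6, 75% of 12

normalized = norm_imo + norm_sibo + norm_both
overall = normalized / (n_imo + n_sibo + n_both)
print(f"{normalized}/30 normalized -> {overall:.0%}")
```

Twenty-two of 30 participants normalizing works out to the 73% overall rate Rezaie cited.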
The extent of patient response to PED was notable, given that 83% had failed prior treatments, Rezaie said.
“While we expected benefit based on palatability improvements and prior retrospective data, the rapid reduction in methane and hydrogen gas — and the sustained microbiome modulation even after reintroducing a regular diet — exceeded expectations,” he said. A significant reduction in visceral fat was another novel finding.
“This study reinforces the power of diet as a therapeutic tool,” Rezaie said, adding that the results show that elemental diets can be palatable, thereby improving patient adherence, tolerance, and, eventually, effectiveness. This is particularly valuable for patients with SIBO and IMO who do not tolerate or respond to antibiotics, prefer nonpharmacologic options, or experience recurrent symptoms after antibiotic treatment.
Limitations and Next Steps
Study limitations included the lack of a placebo group with a sham diet, the short follow-up after reintroducing a regular diet, and the inability to assess microbial gene function.
However, the results support the safety, tolerance, and benefit of a PED in patients with IMO/SIBO. Personalized dietary interventions that support the growth of beneficial bacteria may be an effective approach to treating these disorders, Rezaie and colleagues noted in their publication.
Although the current study is a promising first step, longer-term studies are needed to evaluate the durability of microbiome and symptom improvements, Rezaie said.
Making the Most of Microbiome Manipulation
Elemental diets may help modulate the gut microbiome while reducing immune activation, making them attractive for microbiome-targeted gastrointestinal therapies, Jatin Roper, MD, a gastroenterologist at Duke University, Durham, North Carolina, told GI & Hepatology News.
“Antibiotics are only effective in half of SIBO cases and often require retreatment, so better therapies are needed,” said Roper, who was not affiliated with the study. He added that its findings confirmed the researchers’ hypothesis that a PED can be both safe and effective in patients with SIBO.
Roper noted the 83% symptom improvement as the study’s most unexpected and encouraging finding, as it represents a substantial improvement compared with standard antibiotic therapy. “It is also surprising that the tolerance rate of the elemental diet in this study was 100%,” he said.
However, diet palatability remains a major barrier in real-world practice.
“Adherence rates are likely to be far lower than in trials in which patients are closely monitored, and this challenge will not be easily overcome,” he added.
The study’s limitations, including the lack of metagenomic analysis and a placebo group, are important to address in future research, Roper said. In particular, controlled trials of elemental diets are needed to determine whether microbiome changes are directly responsible for symptom improvement.
The study was supported in part by Good LFE and the John and Geraldine Cusenza Foundation. Rezaie disclosed serving as a consultant/speaker for Bausch Health and having equity in Dieta Health, Gemelli Biotech, and Good LFE. Roper had no financial conflicts to disclose.
A version of this article first appeared on Medscape.com.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
The Extra-Bacterial Gut Ecosystem: The Influence of Phages and Fungi in the Microbiome
WASHINGTON, DC — Research on the gut microbiome — and clinical attention to it — has focused mainly on bacteria, but bacteriophages and fungi play critical roles as well, with significant influences on health and disease, experts said at the Gut Microbiota for Health (GMFH) World Summit 2025.
Fungi account for < 1% of the total genetic material in the microbiome but 1%-2% of its total biomass. “Despite their relative rarity, they have an important and outsized influence on gut health” — an impact that results from their unique interface with the immune system, said Kyla Ost, PhD, of the Anschutz Medical Campus, University of Colorado, in Denver, whose research focuses on this interface.
And bacteriophages — viruses that infect and kill bacteria — are highly abundant in the gut. “Bacteriophages begin to colonize our GI [gastrointestinal] tract at the same time we develop our own microbiome shortly after birth, and from that time on, they interact with the bacteria in our GI tract, shaping [and being shaped by] the bacterial species we carry with us,” said Robert (Chip) Schooley, MD, distinguished professor of medicine at the University of California San Diego School of Medicine.
“We’ve been talking about things that affect the gut microbiome — diet, genetics, immune response — but probably the biggest influence on what grows in the GI tract are bacteriophages,” said Schooley, co-director of the Center for Innovative Phage Applications and Therapeutics, in a session on the extra-bacterial gut ecosystem.
‘New Life’ for Phage Therapy
Bacteriophages represent a promising approach to treating multidrug-resistant bacterial pathogens in an era of increasing resistance and a dried-up antibiotic discovery pipeline, Schooley said. (In 2019, an estimated 4.95 million deaths worldwide were associated with bacterial antimicrobial resistance, a number forecast to rise to an estimated 8.22 million by 2050.)
But in addition to suppressing bacterial pathogens causing direct morbidity, phage therapy has the potential to suppress bacteria believed to contribute to chronic diseases, he said. “We have proof-of-concept studies about the ability of phage to modulate bacteria in the digestive tract,” and an increasing number of clinical trials of the use of phages in GI and other diseases are underway, he said.
Phages were discovered just over a century ago, but phage therapy was widely abandoned once antibiotics were developed, except for in Russia and the former Eastern Bloc countries, where phage therapy continued to be used.
Phage therapy “got new life” in the West, Schooley said, about 10-15 years ago with an increasing number of detailed and high-profile case reports, including one in which a UC San Diego colleague, Tom Patterson, PhD, contracted a deadly multidrug resistant bacterial infection in Egypt and was eventually saved with bacteriophage therapy. (The case was the subject of the book The Perfect Predator).
Since then, as described in case reports and studies in the literature, “hundreds of people have been treated with bacteriophages here and in Europe,” most commonly for pulmonary infections and infections in implanted vascular and orthopedic devices, said Schooley, who coauthored a review in Cell in 2023 that describes phage biology and advances and future directions in phage therapy.
The use of bacteriophages to prevent systemic infections during high-risk periods — such as during chemotherapy for hematological malignancies — is an area of interest, he said at the meeting.
In research that is making its way to a clinical trial of patients undergoing allogeneic hematopoietic stem cell transplant (HSCT), researchers screened a library of phages to identify those with broad coverage of Escherichia coli. Using tail fiber engineering and CRISPR technology, they then engineered a combination of the four most complementary bacteriophages to selectively kill E coli — including fluoroquinolone-resistant strains that, in patients whose GI tracts are colonized with these strains, can translocate from the gut into the bloodstream, causing sepsis, during chemotherapeutic regimens for HSCT.
In a mouse model, the CRISPR-enhanced four-phage cocktail (SNIPR001) led to a steady reduction in the E coli colony counts in stool, “showing you can modulate these bacteria in the gut by using bacteriophages to kill them,” Schooley said. Moreover, the CRISPR enhancement strengthened the phages’ ability to break up biofilms, he said, showing “that you can engineer bacteriophages to make them better killers.” A phase 1b/2a study is being planned.
Other Niches for Therapeutic Phages, Challenges
Bacteriophages also could be used to target a gut bacterium whose suppression has been shown to attenuate alcoholic liver disease. Patients with alcoholic hepatitis “have a gut microbiome that is different in distribution,” Schooley said, often with increased numbers of Enterococcus faecalis that produce cytolysin, an exotoxin that exacerbates liver injury and is associated with increased mortality.
In published research led by investigators at UC San Diego, stool from cytolysin-positive patients with alcoholic hepatitis was found to exacerbate ethanol-induced liver disease in gnotobiotic mice, and phage therapy against cytolytic E faecalis was found to abolish it, Schooley shared.
Research is also exploring the potential of phage therapy to selectively target adherent invasive E coli in Crohn’s disease, and Klebsiella pneumoniae in the gut microbiome as an exacerbator of inflammatory bowel disease (IBD), he said.
And investigators in Japan, he noted, have reported that bacteriophage therapy against K pneumoniae can ameliorate liver inflammation and disease severity in primary sclerosing cholangitis.
Challenges in the therapeutic use of phages include the narrow host range of phages and an uncertain predictive value of in vitro phage susceptibility testing. “We don’t know yet how to do resistance testing as well as we do with antibiotics,” he said.
In addition, most phages tend to be acid labile, requiring strategies to mitigate inactivation by gastric acid, and there are “major knowledge gaps” relating to phage pharmacology. “We also know that adaptive immune responses to phages can, but often don’t, impact therapy, and we want to understand that better in clinical trials,” Schooley said.
Phages that have a “lysogenic” lifestyle — as opposed to the lytic phages used therapeutically — can contribute to antibiotic resistance by facilitating the interchange of bacterial resistance genes, he noted.
A Window Into the Mycobiome
The human gut mycobiome is primarily composed of fungi in the Saccharomyces, Candida, and Malassezia genera, with Candida species dominating. Fungal cells harbor distinct immune-stimulatory molecules and activate distinct immune pathways compared with bacteria and other members of the microbiome, said Ost, assistant professor in the immunology and microbiology department of CU Anschutz.
Some fungi, including those in the Candida genus, activate adaptive and innate immune responses that promote metabolic health and protect against infection. A recently published study in Science, for instance, demonstrated that colonization with C dubliniensis in very young mice that had been exposed to broad-spectrum antibiotics promoted “the expansion and development of beta cells in the pancreas” in a macrophage-dependent manner, improving metabolic health and reducing diabetes incidence, she shared.
On the other hand, fungi can “exacerbate and perpetuate the pathogenic inflammation that’s found in a growing list of inflammatory diseases” such as IBD. And “in fact, a lot of the benefits and detriments are driven by the exact same species of fungi,” said Ost. “This is particularly true of Candida,” which is a “lifelong colonizer of intestinal microbiota that rarely causes disease but can be quite pathogenic when it does.”
A 2023 review in Nature Reviews Gastroenterology & Hepatology coauthored by Ost describes the role of commensal fungi in intestinal diseases, including IBD, colorectal cancer, and pancreatic cancer.
The pathogenic potential of a commensal fungus depends largely on its strain, its morphology, and its expression of virulence factors, researchers are learning. Ost has studied C albicans, which has been associated with intestinal inflammation and IBD. Like some other Candida species, C albicans are “fascinating shape shifters,” she said, transitioning between a less pathogenic “yeast” morphology and an elongated, adhesive “hyphae” shape that is more pathogenic.
It turns out, according to research by Ost and others, that the C albicans hyphal morphotype — and the adhesins (sticky proteins that facilitate adherence to epithelial cells) and a cytolytic toxin it produces — are preferentially targeted and suppressed by immunoglobulin A (IgA) in the gut.
“Our gut is protected by a large quantity of IgA antibodies…and these IgA interact with the microbiota and play a big role in what microbes are there and the biology of the microbes,” Ost said. Indeed, symptomatic IgA deficiency in humans has been shown to be associated with C albicans overgrowth.
Leveraging the hyphal-specific IgA response to protect against disease seems possible, she said, referring to an experimental anti-Candida fungal vaccine (NDV-3A) designed to induce an adhesin-specific immune response. In a mouse model of colitis, the vaccine protected against C albicans-associated damage. “We saw an immediate IgA response that targeted C albicans in the intestinal contents,” Ost said.
C glabrata, which has also been associated with intestinal inflammation and IBD, does not form hyphae but — depending on the strain — may also induce intestinal IgA responses, she said in describing her recent research.
Ost reported having no disclosures. Schooley disclosed being a consultant for SNIPR Biome, BiomX, Locus, MicrobiotiX, and Amazon; data monitoring committee: Merck.
A version of this article appeared on Medscape.com.
FROM GMFH 2025
Radiation Oncology Reimbursement: New Bill Rocks the Boat
A renewed effort to modernize and stabilize Medicare reimbursement for radiation therapy services is underway.
In mid-March, members of Congress reintroduced bipartisan federal legislation that would shift Medicare reimbursement for radiation oncology services from quantity-based payments to episode-based payments and help stabilize the declining rates of reimbursement in the field.
The Radiation Oncology Case Rate (ROCR) Value Based Payment Program Act, sponsored by two senators and four representatives, would not only “transform” how Medicare reimburses radiation therapy services but also “protect access to high quality cancer care and improve outcomes for patients nationwide, while generating savings for Medicare,” according to a recent American Society for Radiation Oncology (ASTRO) press release praising the bill.
However, the reaction among those in the field has been mixed. Whereas some radiation oncologists are aligned with the bill, others argue that the legislation was crafted without meaningful input from many who will be affected.
“There’s consensus across multiple groups within the house of radiation oncology, hospital groups, and industry, which is incredibly important,” according to Mustafa Basree, DO, a radiation oncology resident who serves on ASTRO’s government relations committee and was part of the discussion on drafting the bill.
But, Basree acknowledged, “not everybody likes the bill.”
A core complaint is a lack of communication and input from clinicians in the field. “If we’re going to decide to design our own quality program — which is really like a dream from a clinician’s standpoint — we need a meaningful way to come together as a unified field,” said Matthew Spraker, MD, PhD, a radiation oncologist practicing in Denver. In this bill, “we’re not getting any of that.”
Impetus for the Bill
Amid dramatic drops in Medicare reimbursement — and with more cuts likely on the horizon — ASTRO announced in January 2024 that the society had partnered with the American College of Radiation Oncology, the American College of Radiology, and the American Society of Clinical Oncology to lobby for payment reform.
Cuts to Medicare reimbursement were approaching 25% at the time. These declines were related, in part, to changes in how radiation treatment was being delivered. Reimbursement has historically been based on the fraction of radiation given, but the field has increasingly embraced hypofractionated regimens and deescalated approaches, which have led to fewer billable fractions and consequently lower reimbursement.
A recent study highlighted significant declines in reimbursement based on this shift in care. For instance, greater use of hypofractionation led to declines in reimbursement for technical services in freestanding radiation oncology offices by nearly 17% for breast cancer and 14.2% for prostate cancer between 2016 and 2022. Inflation-adjusted Medicare conversion factors fell 12.2% in hospital outpatient centers and 20.8% in freestanding offices.
These declining reimbursement rates have occurred alongside changes to radiation oncology practice patterns. A recent analysis reported a 51% increase in the number of US practices with at least 10 radiation oncologists between 2015 and 2023, and a 27% decrease in the number of solo practices during the same period. The number of practicing radiation oncologists increased by 16%, but the number of practices employing them decreased by 13%, indicating a trend toward practice consolidation.
These changes, the analysis found, may affect patients’ access to care. In rural areas, retirement rates were higher and rates of entry of new radiation oncologists were lower than in urban areas.
The current payment structure “has become untenable,” leading to practice consolidation that threaten patient access, especially in rural and underserved areas, a spokesperson for ASTRO told this news organization last year.
What Is the ROCR Act?
The ROCR ACT was drafted by ASTRO to address these issues and reverse declines in Medicare reimbursement.
In addition to shifting radiation oncology reimbursement from fraction-based to episode-based, the bill also aims to encourage clinicians to adoptevidence-based shorter treatment regimens, improve safety and quality by supporting new technologies, and generate savings to Medicare by eliminating outdated and costly practices that have not been shown to improve patient outcomes.
When first introduced last year, the bill did not receive a vote in Congress.
A 2025 version of the bill, introduced last month, largely aligns with the 2024 legislation but contains some “enhancements,” such as improving accreditation with increased incentive payments, outlining a revised exemption for practices with limited resources and instituting a transitional payment period for adaptive radiation therapy to allow billing to continue while a new code is created.
Mixed Reactions
How has the radiation oncology community reacted to the latest ROCR Act?
A recent survey, which included more than 500 practicing radiation oncologists, found that 61% of respondents supported implementing an episode-based payment model such as that proposed in the 2025 legislation, 17.3% neither supported nor opposed it, and 21.6% opposed the model.
“I think this supports this idea that our field would have benefited from much more open discussion in the design phases of the bill,” Spraker told this news organization.
Jason Beckta, MD, PhD, a radiation oncologist at Rutland Regional Medical Center, Vermont, agreed.
While on board with the concept of reform and episode-based payment, given what Beckta called “the absolute absurdity of the cuts in radiation oncology,” he took issue with the lack of transparency in the rollout of the bill.
The announcement about the 2025 version of the ROCR Act came as “a complete surprise — out of nowhere — except to insiders,” Beckta said.
ASTRO held a town hall in February 2025 “featuring new information and discussion” regarding ROCR 2025, but the bill had already been finalized for submission at that point, Beckta said.
And although Beckta and Spraker believe ASTRO had good intentions, the physicians highlighted concerns with several aspects of the bill.
“What upsets me most is the blatant regulatory capture,” Beckta said. The legislation will require all practices to be accredited by either ASTRO, the American College of Radiation Oncology, or the American College of Radiology, essentially capturing their business through a regulation or having practices “face a 2.5% penalty, which is up 1% from the prior version,” Beckta explained.
The bill also shortens the runway to get accredited from 3 years to 2 years, Beckta noted, stressing the arduousness of the accreditation process as a hurdle for many practices.
But doing the accreditation program does not mean care will get better, Spraker said. In fact, “that is absolutely not the case.”
Another issue: Requirements for medical equipment and quality review periods seem to favor industry over patients and practices, he said, highlighting the potential role of manufacturers in determining if or when equipment updates are required.
“A field like ours has rapidly exploding technology with not-always-clear patient benefit,” said Spraker. “We’re seeing too many examples where people are leveraging that, basically to sell devices instead of to help patients,” he added.
Furthermore, Beckta noted, the bill allows for reduced reimbursement of between 4% and 7%, depending on the circumstances — a cut to reimbursement that is being justified by saying it’s the only way the bill will get through Congress. But “it’s just a less-bad option than continued cuts to fee-for-service,” Beckta said.
ASTRO leadership has expressed strong support for the bill. In the recent ASTRO press release, Howard M. Sandler, MD, chair of the ASTRO Board of Directors, called the ROCR Act “the only viable policy solution designed to provide payment stability for the field of radiation oncology in 2026 and beyond.”
The ROCR Act, which is broadly supported by more than 80 organizations, “represents a balanced, evidence-based policy solution to safeguard access to high value cancer treatment for Americans,” Sandler said.
“I believe in this bill,” Basree added.
Basree touted the replacement of a fee-for-service model with a “value-based payment system, ensuring predictable, fair reimbursement for the field” as a major win for stabilizing Medicare reimbursement. The bill also includes measures to improve patient access, such as providing discounted transportation for patients — a significant need, particularly in rural areas, he explained.
Although not everyone is happy with the bill, ASTRO did aim “to build coalition of support,” Basree said. “It’s an uphill battle, for sure, but we should press forward and hope for the best,” he added.
Even with their concerns, both Spraker and Beckta are optimistic that improvements to the bill can still be made, and urge colleagues to study the bill, speak out, and engage to help promote the best possible policy.
Basree reported receiving reimbursement for meeting travel and lodging as both a Fellow and member of the Association of Residents in Radiation Oncology. Spraker and Beckta reported having no relevant disclosures.
A version of this article first appeared on Medscape.com.
A renewed effort to modernize and stabilize Medicare reimbursement for radiation therapy services is underway.
In mid-March, members of Congress reintroduced bipartisan federal legislation that would shift Medicare reimbursement for radiation oncology services from quantity-based payments to episode-based payments and help stabilize the declining rates of reimbursement in the field.
The Radiation Oncology Case Rate (ROCR) Value Based Payment Program Act, sponsored by two senators and four representatives, would not only “transform” how Medicare reimburses radiation therapy services, it would also “protect access to high quality cancer care and improve outcomes for patients nationwide, while generating savings for Medicare,” according to a recent American Society for Radiation Oncology (ASTRO) press release praising the bill.
However, the reaction among those in the field has been mixed. Whereas some radiation oncologists are aligned with the bill, others argue that the legislation was crafted without meaningful input from many who will be affected.
“There’s consensus across multiple groups within the house of radiation oncology, hospital groups, and industry, which is incredibly important,” according to Mustafa Basree, DO, a radiation oncology resident who serves on ASTRO’s government relations committee and was part of the discussion on drafting the bill.
But, Basree acknowledged, “not everybody likes the bill.”
A core complaint is a lack of communication and input from clinicians in the field. “If we’re going to decide to design our own quality program — which is really like a dream from a clinician’s standpoint — we need a meaningful way to come together as a unified field,” said Matthew Spraker, MD, PhD, a radiation oncologist practicing in Denver. In this bill, “we’re not getting any of that.”
Impetus for the Bill
Amid dramatic drops in Medicare reimbursement — and with more probably on the horizon — ASTRO announced in January 2024 that the society had partnered with the American College of Radiation Oncology, the American College of Radiology, and the American Society of Clinical Oncology to lobby for payment reform.
Cuts to Medicare reimbursement were approaching 25% at the time. These declines were related, in part, to changes in how radiation treatment was being delivered. Reimbursement has historically been based on the fraction of radiation given, but the field has increasingly embraced hypofractionated regimens and deescalated approaches, which have led to fewer billable fractions and consequently lower reimbursement.
A recent study highlighted significant declines in reimbursement based on this shift in care. For instance, greater use of hypofractionation led to declines in reimbursement for technical services in freestanding radiation oncology offices by nearly 17% for breast cancer and 14.2% for prostate cancer between 2016 and 2022. Inflation-adjusted Medicare conversion factors fell 12.2% in hospital outpatient centers and 20.8% in freestanding offices.
These declining reimbursement rates have occurred alongside changes to radiation oncology practice patterns. A recent analysis reported a 51% increase in the number of US practices with at least 10 radiation oncologists between 2015 and 2023, and a 27% decrease in the number of solo practices during the same period. The number of practicing radiation oncologists increased by 16%, but the number of practices employing them decreased by 13%, indicating a trend toward practice consolidation.
These changes, the analysis found, may affect patients’ access to care. In rural areas, retirement rates were higher and rates of entry of new radiation oncologists was lower compared with urban areas.
The current payment structure “has become untenable,” leading to practice consolidation that threatens patient access, especially in rural and underserved areas, a spokesperson for ASTRO told this news organization last year.
What Is the ROCR Act?
The ROCR Act was drafted by ASTRO to address these issues and reverse declines in Medicare reimbursement.
In addition to shifting radiation oncology reimbursement from fraction-based to episode-based, the bill also aims to encourage clinicians to adopt evidence-based shorter treatment regimens, improve safety and quality by supporting new technologies, and generate savings for Medicare by eliminating outdated and costly practices that have not been shown to improve patient outcomes.
When first introduced last year, the bill did not receive a vote in Congress.
A 2025 version of the bill, introduced last month, largely aligns with the 2024 legislation but contains some “enhancements,” such as improving accreditation with increased incentive payments, outlining a revised exemption for practices with limited resources, and instituting a transitional payment period for adaptive radiation therapy to allow billing to continue while a new code is created.
Mixed Reactions
How has the radiation oncology community reacted to the latest ROCR Act?
A recent survey, which included more than 500 practicing radiation oncologists, found that 61% of respondents supported implementing an episode-based payment model such as that proposed in the 2025 legislation, 17.3% neither supported nor opposed it, and 21.6% opposed the model.
“I think this supports this idea that our field would have benefited from much more open discussion in the design phases of the bill,” Spraker told this news organization.
Jason Beckta, MD, PhD, a radiation oncologist at Rutland Regional Medical Center, Vermont, agreed.
While on board with the concept of reform and episode-based payment, given what Beckta called “the absolute absurdity of the cuts in radiation oncology,” he took issue with the lack of transparency in the rollout of the bill.
The announcement about the 2025 version of the ROCR Act came as “a complete surprise — out of nowhere — except to insiders,” Beckta said.
ASTRO held a town hall in February 2025 “featuring new information and discussion” regarding ROCR 2025, but the bill had already been finalized for submission at that point, Beckta said.
And although Beckta and Spraker believe ASTRO had good intentions, the physicians highlighted concerns with several aspects of the bill.
“What upsets me most is the blatant regulatory capture,” Beckta said. The legislation would require all practices to be accredited by ASTRO, the American College of Radiation Oncology, or the American College of Radiology, essentially capturing their business through regulation; practices that are not accredited would “face a 2.5% penalty, which is up 1% from the prior version,” Beckta explained.
The bill also shortens the runway to get accredited from 3 years to 2 years, Beckta noted, stressing the arduousness of the accreditation process as a hurdle for many practices.
But doing the accreditation program does not mean care will get better, Spraker said. In fact, “that is absolutely not the case.”
Another issue: Requirements for medical equipment and quality review periods seem to favor industry over patients and practices, he said, highlighting the potential role of manufacturers in determining if or when equipment updates are required.
“A field like ours has rapidly exploding technology with not-always-clear patient benefit,” said Spraker. “We’re seeing too many examples where people are leveraging that, basically to sell devices instead of to help patients,” he added.
Furthermore, Beckta noted, the bill allows for reimbursement reductions of between 4% and 7%, depending on the circumstances, a cut justified on the grounds that it is the only way the bill will get through Congress. But “it’s just a less-bad option than continued cuts to fee-for-service,” Beckta said.
ASTRO leadership has expressed strong support for the bill. In the recent ASTRO press release, Howard M. Sandler, MD, chair of the ASTRO Board of Directors, called the ROCR Act “the only viable policy solution designed to provide payment stability for the field of radiation oncology in 2026 and beyond.”
The ROCR Act, which is broadly supported by more than 80 organizations, “represents a balanced, evidence-based policy solution to safeguard access to high value cancer treatment for Americans,” Sandler said.
“I believe in this bill,” Basree added.
Basree touted the replacement of a fee-for-service model with a “value-based payment system, ensuring predictable, fair reimbursement for the field” as a major win for stabilizing Medicare reimbursement. The bill also includes measures to improve patient access, such as providing discounted transportation for patients — a significant need, particularly in rural areas, he explained.
Although not everyone is happy with the bill, ASTRO did aim “to build [a] coalition of support,” Basree said. “It’s an uphill battle, for sure, but we should press forward and hope for the best,” he added.
Even with their concerns, both Spraker and Beckta are optimistic that improvements to the bill can still be made, and urge colleagues to study the bill, speak out, and engage to help promote the best possible policy.
Basree reported receiving reimbursement for meeting travel and lodging as both a Fellow and member of the Association of Residents in Radiation Oncology. Spraker and Beckta reported having no relevant disclosures.
A version of this article first appeared on Medscape.com.
Patient Navigation Boosts Follow-Up Colonoscopy Completion
The intervention led to a significant 13-percentage-point increase in follow-up colonoscopy completion at 1 year, compared with usual care (55.1% vs 42.1%), according to the study, which was published online in Annals of Internal Medicine.
“Patients with an abnormal fecal test result have about a 1 in 20 chance of having colorectal cancer found, and many more will be found to have advanced adenomas that can be removed to prevent cancer,” Gloria Coronado, PhD, of Kaiser Permanente Center for Health Research, Portland, Oregon, and University of Arizona Cancer Center, Tucson, said in an interview.
“It is critical that these patients get a follow-up colonoscopy,” she said. “Patient navigation can accomplish this goal.”
‘Highly Effective’ Intervention
Researchers compared the effectiveness of a patient navigation program with that of usual care outreach in increasing follow-up colonoscopy completion after an abnormal stool test. They also developed a risk-prediction model that calculated a patient’s probability of obtaining a follow-up colonoscopy without navigation to determine if the addition of this intervention had a greater impact on those determined to be less likely to follow through.
The study included 967 patients from a community health center in Washington State who received an abnormal fecal test result within the prior month. The mean age of participants was 61 years, approximately 45% were women and 77% were White, and 18% preferred a Spanish-language intervention. In total, 479 patients received the intervention and 488 received usual care.
The intervention was delivered by a patient navigator who mailed introductory letters, sent text messages, and made live phone calls. In the calls, the navigators addressed the topics of barrier assessment and resolution, bowel preparation instruction and reminders, colonoscopy check-in, and understanding colonoscopy results and retesting intervals.
Patients in the usual-care group were contacted by a referral coordinator to schedule a follow-up colonoscopy appointment. If they couldn’t be reached initially, up to two follow-up attempts were made at 30 and 45 days after the referral date.
Patient navigation resulted in a significant 13-percentage-point increase in follow-up, and those in this group completed a colonoscopy 27 days sooner than those in the usual care group (mean, 229 days vs 256 days).
Contrary to the authors’ expectation, the effectiveness of the intervention did not vary by patients’ predicted likelihood of obtaining a colonoscopy without navigation.
Notably, 20.3% of patients were unreachable or lost to follow-up, and 29.7% did not receive navigation. Among the 479 patients assigned to navigation, 79 (16.5%) declined participation and 56 (11.7%) were never reached.
The study was primarily conducted during the height of the COVID-19 pandemic, which created additional systemic and individual barriers to completing colonoscopies.
Nevertheless, the authors wrote, “our findings suggest that patient navigation is highly effective for patients eligible for colonoscopy.”
“Most patients who were reached were contacted with six or fewer phone attempts,” Coronado noted. “Further efforts are needed to determine how to reach and motivate patients [who did not participate] to get a follow-up colonoscopy.”
Coronado and colleagues are exploring ways to leverage artificial intelligence and virtual approaches to augment patient navigation programs — for example, by using a virtual navigator or low-cost automated tools to provide education to build patient confidence in getting a colonoscopy.
‘A Promising Tool’
“Colonoscopy completion after positive stool-based testing is critical to mitigating the impact of colon cancer,” commented Rajiv Bhuta, MD, assistant professor of clinical gastroenterology & hepatology, Lewis Katz School of Medicine, Temple University, Philadelphia, who was not involved in the study. “While prior studies assessing navigation have demonstrated improvements, none were as large enrollment-wise or as generalizable as the current study.”
That said, Bhuta said in an interview that the study could have provided more detail about coordination and communication with local gastrointestinal practices.
“Local ordering and prescribing practices vary and can significantly impact compliance rates. Were colonoscopies completed via an open access pathway or were the patients required to see a gastroenterologist first? How long was the average wait time for colonoscopy once scheduled? What were the local policies on requiring an escort after the procedure?”
He also noted that some aspects of the study — such as access to reduced-cost specialty care and free ride-share services — may limit generalizability to settings without such resources.
He added: “Although patient navigators for cancer treatment have mandated reimbursement, there is no current reimbursement for navigators for abnormal screening tests, another barrier to widespread implementation.”
Bhuta said that the dropout rate in the study mirrors that of his own real-world practice, which serves a high-risk, low-resource community. “I would specifically like to see research that provides behavioral insights on why patients respond positively to navigation — whether it is due to reminders, emotional support, or logistical assistance. Is it systemic barriers or patient disinterest or both that drives noncompliance?”
Despite these uncertainties and the need to refine implementation logistics, Bhuta concluded, “this strategy is a promising tool to reduce disparities and improve colorectal cancer outcomes. Clinicians should advocate for or implement structured follow-up systems, particularly in high-risk populations.”
The study was funded by the US National Cancer Institute. Coronado received a grant/contract from Guardant Health. Bhuta declared no relevant conflicts of interest.
A version of this article appeared on Medscape.com.
The intervention led to a significant 13-point increase in follow-up colonoscopy completion at 1 year, compared with usual care (55.1% vs 42.1%), according the study, which was published online in Annals of Internal Medicine.
“Patients with an abnormal fecal test results have about a 1 in 20 chance of having colorectal cancer found, and many more will be found to have advanced adenomas that can be removed to prevent cancer,” Gloria Coronado, PhD, of Kaiser Permanente Center for Health Research, Portland, Oregon, and University of Arizona Cancer Center, Tucson, said in an interview.
“It is critical that these patients get a follow-up colonoscopy,” she said. “Patient navigation can accomplish this goal.”
‘Highly Effective’ Intervention
Researchers compared the effectiveness of a patient navigation program with that of usual care outreach in increasing follow-up colonoscopy completion after an abnormal stool test. They also developed a risk-prediction model that calculated a patient’s probability of obtaining a follow-up colonoscopy without navigation to determine if the addition of this intervention had a greater impact on those determined to be less likely to follow through.
The study included 967 patients from a community health center in Washington State who received an abnormal fecal test result within the prior month. The mean age of participants was 61 years, approximately 45% were women and 77% were White, and 18% preferred a Spanish-language intervention. In total, 479 patients received the intervention and 488 received usual care.
The intervention was delivered by a patient navigator who mailed introductory letters, sent text messages, and made live phone calls. In the calls, the navigators addressed the topics of barrier assessment and resolution, bowel preparation instruction and reminders, colonoscopy check-in, and understanding colonoscopy results and retesting intervals.
Patients in the usual-care group were contacted by a referral coordinator to schedule a follow-up colonoscopy appointment. If they couldn’t be reached initially, up to two follow-up attempts were made at 30 and 45 days after the referral date.
Patient navigation resulted in a significant 13-percentage-point increase in follow-up colonoscopy completion, and those in this group completed a colonoscopy 27 days sooner than those in the usual care group (mean, 229 days vs 256 days).
Contrary to the authors’ expectation, the effectiveness of the intervention did not vary by patients’ predicted likelihood of obtaining a colonoscopy without navigation.
Notably, 20.3% of patients were unreachable or lost to follow-up, and 29.7% did not receive navigation. Among the 479 patients assigned to navigation, 79 (16.5%) declined participation and 56 (11.7%) were never reached.
The study was primarily conducted during the height of the COVID-19 pandemic, which created additional systemic and individual barriers to completing colonoscopies.
Nevertheless, the authors wrote, “our findings suggest that patient navigation is highly effective for patients eligible for colonoscopy.”
“Most patients who were reached were contacted with six or fewer phone attempts,” Coronado noted. “Further efforts are needed to determine how to reach and motivate patients [who did not participate] to get a follow-up colonoscopy.”
Coronado and colleagues are exploring ways to leverage artificial intelligence and virtual approaches to augment patient navigation programs — for example, by using a virtual navigator or low-cost automated tools to provide education to build patient confidence in getting a colonoscopy.
‘A Promising Tool’
“Colonoscopy completion after positive stool-based testing is critical to mitigating the impact of colon cancer,” commented Rajiv Bhuta, MD, assistant professor of clinical gastroenterology & hepatology, Lewis Katz School of Medicine, Temple University, Philadelphia, who was not involved in the study. “While prior studies assessing navigation have demonstrated improvements, none were as large enrollment-wise or as generalizable as the current study.”
That said, Bhuta said in an interview that the study could have provided more detail about coordination and communication with local gastrointestinal practices.
“Local ordering and prescribing practices vary and can significantly impact compliance rates. Were colonoscopies completed via an open access pathway or were the patients required to see a gastroenterologist first? How long was the average wait time for colonoscopy once scheduled? What were the local policies on requiring an escort after the procedure?”
He also noted that some aspects of the study — such as access to reduced-cost specialty care and free ride-share services — may limit generalizability to settings without such resources.
He added: “Although patient navigators for cancer treatment have mandated reimbursement, there is no current reimbursement for navigators for abnormal screening tests, another barrier to wide-spread implementation.”
Bhuta said that the dropout rate in the study mirrors that of his own real-world practice, which serves a high-risk, low-resource community. “I would specifically like to see research that provides behavioral insights on why patients respond positively to navigation — whether it is due to reminders, emotional support, or logistical assistance. Is it systemic barriers or patient disinterest or both that drives noncompliance?”
Despite these uncertainties and the need to refine implementation logistics, Bhuta concluded, “this strategy is a promising tool to reduce disparities and improve colorectal cancer outcomes. Clinicians should advocate for or implement structured follow-up systems, particularly in high-risk populations.”
The study was funded by the US National Cancer Institute. Coronado received a grant/contract from Guardant Health. Bhuta declared no relevant conflicts of interest.
A version of this article appeared on Medscape.com.
FROM ANNALS OF INTERNAL MEDICINE
Intermittent Fasting Outperforms Daily Calorie Cutting for Weight Loss
A 4:3 intermittent fasting (IMF) regimen outperformed daily caloric restriction (DCR) for weight loss, a randomized study found.
The 4:3 IMF program produced modestly greater weight loss than DCR, by 2.89 kg over 12 months, in the context of a guidelines-based, high-intensity, comprehensive behavioral weight loss program, according to Danielle M. Ostendorf, PhD, MS, co–lead author and an assistant professor at the University of Tennessee, Knoxville, and Victoria Catenacci, MD, study principal investigator, co–lead author, and an associate professor at the University of Colorado Anschutz Medical Campus, Aurora.
The study, published in Annals of Internal Medicine, found that objectively measured percentage caloric restriction was greater in the 4:3 IMF group, whereas there was no between-group difference in change in total moderate to vigorous physical activity, suggesting that the difference in weight loss may have been driven by greater adherence to 4:3 IMF. The 4:3 IMF program was well tolerated, and attrition was lower in this group (19% in the IMF group vs 30% in the DCR group).
The authors noted that alternative patterns for restricting dietary energy intake are gaining attention owing to the difficulty of adhering to a reduced-calorie diet daily, with most adults who lose weight through DCR showing significant weight regain a year later.
According to Ostendorf and Catenacci, fasting strategies “come in two different flavors and oftentimes get confused in the lay press and by patients and researchers. And there is a difference between IMF and time-restricted eating (TRE),” they said in an interview. “TRE involves limiting the daily window of food intake to 8-10 hours or less on most days of the week — for example, 16:8 or 14:10 strategies. TRE is done every day, consistently and involves eating in the predefined window, and fasting outside of that window.”
IMF is a more periodic and significant fast and involves cycling between complete or near-complete (> 75%) energy restriction on fast days and ad libitum energy intake on nonfast days.
An appealing feature of IMF is that dieters do not have to focus on counting calories and restricting intake every day as they do with DCR, the authors wrote. Furthermore, the periodic nature of fasting is simpler and may mitigate the constant hunger associated with DCR.
Some participants said the diet was dreadful, but many said it was the easiest diet they had ever been on. “But it did take time for people to adjust to this strategy,” Catenacci said. “It was reassuring to see no evidence of increased binge-eating behaviors.”
Although objectively measured adherence to the targeted energy deficit (percentage caloric restriction from baseline) was below the target of 34.3% in both groups, the 4:3 IMF group showed greater percentage caloric restriction over 12 months. This suggests that, on average, the 4:3 IMF approach may be more sustainable over a year than DCR. However, weight loss varied in both groups, and future studies should evaluate biological and behavioral predictors of response to both 4:3 IMF and DCR in order to personalize weight-loss recommendations.
Study Details
The investigators randomized 165 patients at the University of Colorado Anschutz Medical Campus (mean age, 42 years [range, 18-60]; mean baseline weight, 97.4 kg; mean baseline body mass index [BMI], 34.1) to 4:3 IMF (n = 84) or DCR (n = 81). Of these, 74% were women and 86% were White individuals, and 125 (76%) completed the trial.
The 4:3 IMF group restricted energy intake by 80% on 3 nonconsecutive fast days per week, with ad libitum intake on the other 4 days (4:3 IMF). The 80% calorie reduction fasting corresponded to about 400-600 kcals/d for women and 500-700 kcals/d for men.
“Participants were only required to count calories on their fast days, which is part of the appeal,” Ostendorf said. Although permitted to eat what they wanted on nonfast days, participants were encouraged to make healthy food choices and consume healthy portion sizes.
For its part, the DCR group reduced daily energy intake by 34% to match the weekly energy deficit of 4:3 IMF.
Both groups participated in a high-intensity comprehensive weight loss program with group-based behavioral support and a recommended increase in moderate-intensity physical activity to 300 min/wk.
On the primary endpoint, the 4:3 IMF group showed a weight loss of 7.7 kg (95% CI, –9.6 to –5.9 kg) compared with 4.8 kg (95% CI, –6.8 to –2.8 kg; P = .040) in the DCR group at 12 months. The percentage change in body weight from baseline was –7.6% (95% CI, –9.5% to –5.7%) in the 4:3 IMF group and –5.0% (95% CI, –6.9% to –3.1%) in the DCR group.
At 12 months, 58% (n = 50) of participants in the 4:3 IMF group achieved weight loss of at least 5% vs 47% (n = 27) of those in the DCR group. In addition, 38% (n = 26) of participants in the 4:3 IMF group achieved weight loss of at least 10% at 12 months vs 16% (n = 9) of those in the DCR group. Changes in body composition, BMI, and waist circumference also tended to favor the 4:3 IMF group.
On other 12-month measures, point estimates of change in systolic blood pressure, total and low-density lipoprotein cholesterol levels, triglyceride level, homeostasis model assessment of insulin resistance, fasting glucose level, and hemoglobin A1c level favored 4:3 IMF. Point estimates of change in diastolic blood pressure and high-density lipoprotein cholesterol level favored DCR.
Currently lacking, the authors said, are data on safety in children, in older adults, and in adults with any of a long list of conditions: diabetes, cardiovascular disease, kidney disease (stage 4 or 5), cancer, and eating disorders. Data are also lacking for people of normal weight or with only mild overweight and for pregnant or lactating women. “There have been concerns about IMF causing eating disorders, so we did not include people with eating disorders in our study,” Ostendorf and Catenacci said.
Offering an outside perspective on the findings, James O. Hill, PhD, director of the Nutrition Obesity Research Center and a professor at the University of Alabama at Birmingham, said he believes IMF is a viable option for people trying to lose weight and has prescribed this approach for some in his practice. “But there is no one strategy that works for everyone,” he said in an interview. “I recommend IMF as a science-based strategy that can be effective for some people, and I think it should be on the list of science-based tools that people can consider using.” But as it won’t work for everyone, “we need to consider both metabolic success and behavioral success. In other words, would it be more effective if people could do it and how easy or hard is it for people to do?”
Audra Wilson, MS, RD, a bariatric dietitian at Northwestern Medicine Delnor Hospital in Geneva, Illinois, who was not involved in the study, expressed more reservations. “We do not specifically recommend intermittent fasting at Northwestern Medicine. There is no set protocol for this diet, and it can vary in ways that can limit nutrition to the point where we are not meeting needs on a regular basis,” she said in an interview.
Moreover, this study did not specify exact nutritional recommendations for participants but merely reduced overall caloric intake. “Although intermittent fasting may be helpful to some, in my nearly 10 years of experience I have not seen it be effective for many and especially not long term,” Wilson added.
Concerningly, IMF can foster disordered eating patterns of restriction followed by binging. “Although a balanced diet is more difficult to achieve, guidance from professionals like dietitians can give patients the tools to achieve balance, meet all nutrient needs, achieve satiety, and maybe most importantly, have a better relationship with food,” she said.
As for the influence of metabolic factors that may be associated with better weight loss, Ostendorf said, “be on the lookout for future publications in this area. We are analyzing data around changes in energy expenditure and changes in hunger-related hormones, among others.” A colleague is collecting biological samples to study genetics in this context. “However, in general, it appeared that the difference in weight loss was due to a greater caloric deficit in the 4:3 IMF group.”
Ostendorf and Catenacci are currently conducting a pilot study testing 4:3 IMF in breast cancer survivors. “We think this is a promising strategy for weight loss in breast cancer survivors who struggle with overweight/obesity in addition to their cancer diagnosis,” Ostendorf said.
This study was funded by the National Institute of Diabetes and Digestive and Kidney Diseases. Ostendorf, Catenacci, Hill, and Wilson disclosed no relevant financial conflicts of interest.
A version of this article appeared on Medscape.com.
a randomized study found.
A 4:3 IMF program produced modestly superior weight loss than DCR of 2.89 kg over 12 months in the context of a guidelines-based, high-intensity, comprehensive behavioral weight loss program, according to Danielle M. Ostendorf, PhD, MS, co–lead author and an assistant professor at the University of Tennessee, Knoxville, and Victoria Catenacci, MD, study principal investigator, co–lead author, and an associate professor located at the University of Colorado Anschutz Medical Campus, Aurora.
The study, published in Annals of Internal Medicine, found that objectively measured percentage caloric restriction was greater in the 4:3 IMF group, whereas there was no between-group difference in change in total moderate to vigorous physical activity, suggesting that differences in weight loss may have been caused by greater adherence to 4:3 IMF. The 4:3 IMF program was well tolerated and attrition was lower in this group: 19% for IMF group vs 30% for DCR group.
The authors noted that alternative patterns for restricting dietary energy intake are gaining attention owing to the difficulty of adhering to a reduced-calorie diet daily, with most adults who lose weight through DCR showing significant weight regain a year later.
According to Ostendorf and Catenacci, fasting strategies “come in two different flavors and oftentimes get confused in the lay press and by patients and researchers. And there is a difference between IMF and time-restricted eating (TRE),” they said in an interview. “TRE involves limiting the daily window of food intake to 8-10 hours or less on most days of the week — for example, 16:8 or 14:10 strategies. TRE is done every day, consistently and involves eating in the predefined window, and fasting outside of that window.”
IMF is a more periodic and significant fast and involves cycling between complete or near-complete (> 75%) energy restriction on fast days and ad libitum energy intake on nonfast days.
An appealing feature of IMF is that dieters do not have to focus on counting calories and restricting intake every day as they do with DCR, the authors wrote. Furthermore, the periodic nature of fasting is simpler and may mitigate the constant hunger associated with DCR.
Some said the diet was dreadful, but many said it was the easiest diet they had ever been on. “But it did take time for people to adjust to this strategy,” Catenacci said. “It was reassuring to see no evidence of increased binge-eating behaviors.”
Although objectively measured adherence to the targeted energy deficit (percentage caloric restriction from baseline) was below the target of 34.3% in both groups, the 4:3 IMF group showed greater percentage caloric restriction over 12 months. This suggests that, on average, the 4:3 IMF group may be more sustainable over a year than the DCR group. However, weight loss varied in both groups. Future studies should evaluate biological and behavioral predictors of response to both 4:3 IMF and DCR groups in order to personalize recommendations for weight loss.
Study Details
The investigators randomized 165 patients at the University of Colorado Anschutz Medical Campus, with a mean age of 42 years (18-60), a mean baseline weight of 97.4 kg, and a mean baseline body mass index (BMI) of 34.1 to IMF (n = 84) or DCR (n = 81). Of these, 74% were women and 86% were White individuals, and 125 (76%) completed the trial.
The 4:3 IMF group restricted energy intake by 80% on 3 nonconsecutive fast days per week, with ad libitum intake on the other 4 days (4:3 IMF). The 80% calorie reduction fasting corresponded to about 400-600 kcals/d for women and 500-700 kcals/d for men.
“Participants were only required to count calories on their fast days, which is part of the appeal,” Ostendorf said. Although permitted to eat what they wanted on nonfast days, participants were encouraged to make healthy food choices and consume healthy portion sizes.
For its part, the DCR group reduced daily energy intake by 34% to match the weekly energy deficit of 4:3 IMF.
Both groups participated in a high-intensity comprehensive weight loss program with group-based behavioral support and a recommended increase in moderate-intensity physical activity to 300 min/wk.
On the primary endpoint, the 4:3 IMF group showed a weight loss of 7.7 kg (95% CI, –9.6 to –5.9 kg) compared with 4.8 kg (95% CI, –6.8 to –2.8 kg, P =.040) in the DCR group at 12 months. The percentage change in body weight from baseline was –7.6% (95% CI, –9.5% to –5.7%) in the 4:3 IMF group and –5% (95% CI, –6.9% to –3.1%) in the DCR group.
At 12 months, 58% (n = 50) of participants in the 4:3 IMF group achieved weight loss of at least 5% vs 47% (n = 27) of those in the DCR group. In addition, 38% (n = 26) of participants in the 4:3 IMF group achieved weight loss of at least 10% at 12 months vs 16% (n = 9) of those in the DCR group. Changes in body composition, BMI, and waist circumference also tended to favor the 4:3 IMF group.
On other 12-month measures, point estimates of change in systolic blood pressure, total and low-density lipoprotein cholesterol levels, triglyceride level, homeostasis model assessment of insulin resistance, fasting glucose level, and hemoglobin A1c level favored 4:3 IMF. Point estimates of change in diastolic blood pressure and high-density lipoprotein cholesterol level favored DCR.
Currently lacking, the authors said, are safety data in children, older adults, people of normal weight or only mild overweight, pregnant or lactating women, and adults with any of several conditions: diabetes, cardiovascular disease, stage 4 or 5 kidney disease, cancer, or eating disorders. "There have been concerns about IMF causing eating disorders, so we did not include people with eating disorders in our study," Ostendorf and Catenacci said.
Offering an outside perspective on the findings, James O. Hill, PhD, director of the Nutrition Obesity Research Center and a professor at the University of Alabama at Birmingham, said he believes IMF is a viable option for people trying to lose weight and has prescribed this approach for some in his practice. "But there is no one strategy that works for everyone," he said in an interview. "I recommend IMF as a science-based strategy that can be effective for some people, and I think it should be on the list of science-based tools that people can consider using." But as it won't work for everyone, "we need to consider both metabolic success and behavioral success. In other words, would it be more effective if people could do it and how easy or hard is it for people to do?"
Audra Wilson, MS, RD, a bariatric dietitian at Northwestern Medicine Delnor Hospital in Geneva, Illinois, who was not involved in the study, expressed more reservations. “We do not specifically recommend intermittent fasting at Northwestern Medicine. There is no set protocol for this diet, and it can vary in ways that can limit nutrition to the point where we are not meeting needs on a regular basis,” she said in an interview.
Moreover, this study did not specify exact nutritional recommendations for participants but merely reduced overall caloric intake. “Although intermittent fasting may be helpful to some, in my nearly 10 years of experience I have not seen it be effective for many and especially not long term,” Wilson added.
Concerningly, IMF can foster disordered eating patterns of restriction followed by binging. “Although a balanced diet is more difficult to achieve, guidance from professionals like dietitians can give patients the tools to achieve balance, meet all nutrient needs, achieve satiety, and maybe most importantly, have a better relationship with food,” she said.
As for the influence of metabolic factors that may be associated with better weight loss, Ostendorf said, “be on the lookout for future publications in this area. We are analyzing data around changes in energy expenditure and changes in hunger-related hormones, among others.” A colleague is collecting biological samples to study genetics in this context. “However, in general, it appeared that the difference in weight loss was due to a greater caloric deficit in the 4:3 IMF group.”
Ostendorf and Catenacci are currently conducting a pilot study testing 4:3 IMF in breast cancer survivors. “We think this is a promising strategy for weight loss in breast cancer survivors who struggle with overweight/obesity in addition to their cancer diagnosis,” Ostendorf said.
This study was funded by the National Institute of Diabetes and Digestive and Kidney Diseases. Ostendorf, Catenacci, Hill, and Wilson disclosed no relevant financial conflicts of interest.
A version of this article appeared on Medscape.com.
FROM ANNALS OF INTERNAL MEDICINE
Wearable Devices May Predict IBD Flares Weeks in Advance
Consumer wearable devices may predict flares of inflammatory bowel disease (IBD) as early as 7 weeks in advance, according to investigators.
These findings suggest that widely used consumer wearables could support long-term monitoring of IBD and other chronic inflammatory conditions, lead author Robert P. Hirten, MD, of Icahn School of Medicine at Mount Sinai, New York, and colleagues reported.
“Wearable devices are an increasingly accepted tool for monitoring health and disease,” the investigators wrote in Gastroenterology. “They are frequently used in non–inflammatory-based diseases for remote patient monitoring, allowing individuals to be monitored outside of the clinical setting, which has resulted in improved outcomes in multiple disease states.”
Progress has been slower for inflammatory conditions, the investigators noted, despite interest from both providers and patients. Prior studies have explored activity and sleep tracking, or sweat-based biomarkers, as potential tools for monitoring IBD.
Hirten and colleagues took a novel approach, focusing on physiologic changes driven by autonomic nervous system dysfunction — a hallmark of chronic inflammation. Conditions like IBD are associated with reduced parasympathetic activity and increased sympathetic tone, which in turn affect heart rate and heart rate variability. Heart rate tends to rise during flares, while heart rate variability decreases.
Their prospective cohort study included 309 adults with Crohn’s disease (n = 196) or ulcerative colitis (n = 113). Participants used their own or a study-provided Apple Watch, Fitbit, or Oura Ring to passively collect physiological data, including heart rate, resting heart rate, heart rate variability, and step count. A subset of Apple Watch users also contributed oxygen saturation data.
Participants also completed daily symptom surveys using a custom smartphone app and reported laboratory values such as C-reactive protein, erythrocyte sedimentation rate, and fecal calprotectin, as part of routine care. These data were used to identify symptomatic and inflammatory flare periods.
Over a mean follow-up of about 7 months, the physiological data consistently distinguished both types of flares from periods of remission. Heart rate variability dropped significantly during flares, while heart rate and resting heart rate increased. Step counts decreased during inflammatory flares but not during symptom-only flares. Oxygen saturation stayed mostly the same, except for a slight drop seen in participants with Crohn’s disease.
These physiological changes could be detected as early as 7 weeks before a flare. Predictive models that combined multiple metrics — heart rate variability, heart rate, resting heart rate, and step count — were highly accurate, with F1 scores as high as 0.90 for predicting inflammatory flares and 0.83 for predicting symptomatic flares.
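The F1 scores reported above combine a model's precision (how many predicted flares were real) and recall (how many real flares were caught) into one number. A minimal sketch, with illustrative precision/recall values (the study does not report these components separately):

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; 1.0 is a perfect predictor."""
    return 2 * precision * recall / (precision + recall)

# An F1 of 0.90 requires BOTH components to be high; the harmonic mean
# is dragged down by whichever of the two is weaker.
print(round(f1(0.90, 0.90), 2))  # 0.9
print(round(f1(0.95, 0.72), 2))  # 0.82
```

Because the harmonic mean penalizes imbalance, an F1 of 0.90 for inflammatory flares implies the models rarely missed flares and rarely raised false alarms.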
In addition, wearable data helped differentiate between flares caused by active inflammation and those driven by symptoms alone. Even when symptoms were similar, heart rate variability, heart rate, and resting heart rate were significantly higher when inflammation was present, suggesting wearable devices may help address the common mismatch between symptoms and actual disease activity in IBD.
“These findings support the further evaluation of wearable devices in the monitoring of IBD,” the investigators concluded.
The study was supported by the National Institute of Diabetes and Digestive and Kidney Diseases and Ms. Jenny Steingart. The investigators disclosed additional relationships with Agomab, Lilly, Merck, and others.
Dana J. Lukin, MD, PhD, AGAF, of New York-Presbyterian Hospital/Weill Cornell Medicine, New York City, described the study by Hirten et al as “provocative.”
“While the data require a machine learning approach to transform the recorded values into predictive algorithms, it is intriguing that routinely recorded information from smart devices can be used in a manner to inform disease activity,” Lukin said in an interview. “Furthermore, the use of continuously recorded physiological data in this study likely reflects longitudinal health status more accurately than cross-sectional use of patient-reported outcomes or episodic biomarker testing.”
In addition to offering potentially higher accuracy than conventional monitoring, the remote strategy is also more convenient, he noted.
“The use of these devices is likely easier to adhere to than the use of other contemporary monitoring strategies involving the collection of stool or blood samples,” Lukin said. “It may become possible to passively monitor a larger number of patients at risk for flares remotely,” especially given that “almost half of Americans utilize wearables, such as the Apple Watch, Oura Ring, and Fitbit.”
Still, Lukin predicted challenges with widespread adoption.
“More than half of Americans do not routinely [use these devices],” Lukin said. “Cost, access to internet and smartphones, and adoption of new technology may all be barriers to more widespread use.”
He suggested that the present study offers proof of concept, but more prospective data are needed to demonstrate how this type of remote monitoring might improve real-world IBD care.
“Potential studies will assess change in healthcare utilization, corticosteroids, surgery, and clinical flare activity with the use of these data,” Lukin said. “As we learn more about how to handle the large amount of data generated by these devices, our algorithms can be refined to make a feasible platform for practices to employ in routine care.”
Lukin disclosed relationships with Boehringer Ingelheim, Takeda, Vedanta, and others.
FROM GASTROENTEROLOGY
Low-Quality Food Environments Increase MASLD-related Mortality
US counties with low-quality food environments, such as food deserts and food swamps, have significantly higher mortality related to metabolic dysfunction–associated steatotic liver disease (MASLD), according to investigators.
These findings highlight the importance of addressing disparities in food environments and social determinants of health to help reduce MASLD-related mortality, lead author Annette Paik, MD, of Inova Health System, Falls Church, Virginia, and colleagues reported.
“Recent studies indicate that food swamps and deserts, as surrogates for food insecurity, are linked to poor glycemic control and higher adult obesity rates,” the investigators wrote in Clinical Gastroenterology and Hepatology. “Understanding the intersection of these factors with sociodemographic and clinical variables offers insights into MASLD-related outcomes, including mortality.”
To this end, the present study examined the association between food environments and MASLD-related mortality across 2,195 US counties. County-level mortality data were obtained from the CDC WONDER database (2016-2020) and linked to food environment data from the US Department of Agriculture Food Environment Atlas using Federal Information Processing Standards (FIPS) codes. Food deserts were defined as low-income areas with limited access to grocery stores, while food swamps were characterized by a predominance of unhealthy food outlets relative to healthy ones.
Additional data on obesity, type 2 diabetes (T2D), and nine social determinants of health were obtained from CDC PLACES and other publicly available datasets. Counties were stratified into quartiles based on MASLD-related mortality rates. Population-weighted mixed-effects linear regression models were used to evaluate associations between food environment exposures and MASLD mortality, adjusting for region, rural-urban status, age, sex, race, insurance coverage, chronic disease prevalence, SNAP participation, and access to exercise facilities.
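The quartile stratification described above can be sketched in a few lines. This is an illustrative toy version with made-up county records (mortality rate, food-desert share), not the study's actual data or code:

```python
from statistics import quantiles

# Hypothetical county records: (MASLD mortality per 100,000, food-desert share).
counties = [(2.1, 0.10), (3.4, 0.12), (4.0, 0.15), (5.2, 0.18),
            (6.1, 0.20), (7.3, 0.22), (8.8, 0.25), (9.9, 0.28)]

# Quartile cut points on the mortality rate, as in the study's stratification.
q1, q2, q3 = quantiles([m for m, _ in counties], n=4)

lowest = [fd for m, fd in counties if m <= q1]   # lowest-mortality quartile
highest = [fd for m, fd in counties if m > q3]   # highest-mortality quartile

# Compare the mean food-desert share between the extreme quartiles.
print(sum(lowest) / len(lowest), sum(highest) / len(highest))
```

The study's comparison (22.3% vs 14.9% food deserts in the top vs bottom mortality quartile) is exactly this kind of between-quartile contrast, before the regression adjustment.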
Counties with the worst food environments had significantly higher MASLD-related mortality, even after adjusting for clinical and sociodemographic factors. Compared with counties in the lowest quartile of MASLD mortality, those in the highest quartile had a greater proportion of food deserts (22.3% vs 14.9%; P < .001) and food swamps (73.1% vs 65.7%; P < .001). They also had a significantly higher prevalence of obesity (40.5% vs 32.5%), type 2 diabetes (15.8% vs 11.4%), and physical inactivity (33.7% vs 24.9%).
Demographically, counties with higher MASLD mortality had significantly larger proportions of Black and Hispanic residents, and were more likely to be rural and located in the South. These counties also had significantly lower median household incomes, higher poverty rates, fewer adults with a college education, lower access to exercise opportunities, greater SNAP participation, less broadband access, and more uninsured adults.
In multivariable regression models, both food deserts and food swamps remained independently associated with MASLD mortality. Counties in the highest quartile of food desert exposure had a 14.5% higher MASLD mortality rate, compared with the lowest quartile (P = .001), and those in the highest quartile for food swamp exposure had a 13.9% higher mortality rate (P = .005).
Type 2 diabetes, physical inactivity, and lack of health insurance were also independently associated with increased MASLD-related mortality.
“Implementing public health interventions that address the specific environmental factors of each county can help US policymakers promote access to healthy, culturally appropriate food choices at affordable prices and reduce the consumption of poor-quality food,” the investigators wrote. “Moreover, improving access to parks and exercise facilities can further enhance the impact of healthy nutrition. These strategies could help curb the growing epidemic of metabolic diseases, including MASLD and related mortality.”
This study was supported by King Faisal Specialist Hospital & Research Center, the Global NASH Council, Center for Outcomes Research in Liver Diseases, and the Beatty Liver and Obesity Research Fund, Inova Health System. The investigators disclosed no conflicts of interest.
A healthy lifestyle continues to be foundational to the management of metabolic dysfunction–associated steatotic liver disease (MASLD). Poor diet quality is a risk factor for developing MASLD in the US general population. Food deserts and food swamps are symptoms of socioeconomic hardship, as they both are characterized by limited access to healthy food (as described by the US Department of Agriculture Dietary Guidelines for Americans) owing to the absence of grocery stores/supermarkets. However, food swamps suffer from abundant access to unhealthy, energy-dense, yet nutritionally sparse (EDYNS) foods.
The article by Paik et al shows that food deserts and food swamps are associated not only with the burden of MASLD in the United States but also with MASLD-related mortality. The counties with the highest MASLD-related mortality had more food swamps and food deserts, along with higher rates of poverty, unemployment, household crowding, lack of broadband internet access, and lack of high school education; they had more elderly and Hispanic residents and were more likely to be located in the South.
MASLD appears to have origins in the dark underbelly of socioeconomic hardship that might preclude many of our patients from complying with lifestyle changes. Policy changes are urgently needed at a national level, from increasing incentives to establish grocery stores in food deserts to limiting the proportion of EDYNS foods in grocery stores and conspicuous labeling of EDYNS foods by the Food and Drug Administration. At the individual practice level, clinicians can support patients with MASLD by involving a dietitian, providing educational material, and, where possible, using applications that assist healthy dietary habits, empowering patients to choose healthy food options.
Niharika Samala, MD, is assistant professor of medicine, associate program director of the GI Fellowship, and director of the IUH MASLD/NAFLD Clinic at the Indiana University School of Medicine, Indianapolis. She reported no relevant conflicts of interest.
A healthy lifestyle continues to be foundational to the management of metabolic dysfunction–associated steatotic liver disease (MASLD). Poor diet quality is a risk factor for developing MASLD in the US general population. Food deserts and food swamps are symptoms of socioeconomic hardship, as they both are characterized by limited access to healthy food (as described by the US Department of Agriculture Dietary Guidelines for Americans) owing to the absence of grocery stores/supermarkets. However, food swamps suffer from abundant access to unhealthy, energy-dense, yet nutritionally sparse (EDYNS) foods.
The article by Paik et al shows that food deserts and food swamps are not only associated with the burden of MASLD in the United States but also with MASLD-related mortality. The counties with the highest MASLD-related mortality carried higher food swamps and food deserts, poverty, unemployment, household crowding, absence of broadband internet access, lack of high school education, and elderly, Hispanic residents and likely to be located in the South.
MASLD appears to have origins in the dark underbelly of socioeconomic hardship that might preclude many of our patients from complying with lifestyle changes. Policy changes are urgently needed at a national level, from increasing incentives to establish grocery stores in the food deserts to limiting the proportion of EDYNS foods in grocery stores and conspicuous labeling by the Food and Drug Administration of EDYNS foods. At an individual practice level, supporting MASLD patients in the clinic with a dietitian, educational material, and, where possible, utilizing applications to assist healthy dietary habits to empower them in choosing healthy food options.
Niharika Samala, MD, is assistant professor of medicine, associate program director of the GI Fellowship, and director of the IUH MASLD/NAFLD Clinic at the Indiana University School of Medicine, Indianapolis. She reported no relevant conflicts of interest.
A healthy lifestyle continues to be foundational to the management of metabolic dysfunction–associated steatotic liver disease (MASLD). Poor diet quality is a risk factor for developing MASLD in the US general population. Food deserts and food swamps are symptoms of socioeconomic hardship, as they both are characterized by limited access to healthy food (as described by the US Department of Agriculture Dietary Guidelines for Americans) owing to the absence of grocery stores/supermarkets. However, food swamps suffer from abundant access to unhealthy, energy-dense, yet nutritionally sparse (EDYNS) foods.
The article by Paik et al shows that food deserts and food swamps are not only associated with the burden of MASLD in the United States but also with MASLD-related mortality. The counties with the highest MASLD-related mortality carried higher food swamps and food deserts, poverty, unemployment, household crowding, absence of broadband internet access, lack of high school education, and elderly, Hispanic residents and likely to be located in the South.
MASLD appears to have origins in the dark underbelly of socioeconomic hardship that might preclude many of our patients from complying with lifestyle changes. Policy changes are urgently needed at a national level, from increasing incentives to establish grocery stores in the food deserts to limiting the proportion of EDYNS foods in grocery stores and conspicuous labeling by the Food and Drug Administration of EDYNS foods. At an individual practice level, supporting MASLD patients in the clinic with a dietitian, educational material, and, where possible, utilizing applications to assist healthy dietary habits to empower them in choosing healthy food options.
Niharika Samala, MD, is assistant professor of medicine, associate program director of the GI Fellowship, and director of the IUH MASLD/NAFLD Clinic at the Indiana University School of Medicine, Indianapolis. She reported no relevant conflicts of interest.
according to investigators.
These findings highlight the importance of addressing disparities in food environments and social determinants of health to help reduce MASLD-related mortality, lead author Annette Paik, MD, of Inova Health System, Falls Church, Virginia, and colleagues reported.
“Recent studies indicate that food swamps and deserts, as surrogates for food insecurity, are linked to poor glycemic control and higher adult obesity rates,” the investigators wrote in Clinical Gastroenterology and Hepatology. “Understanding the intersection of these factors with sociodemographic and clinical variables offers insights into MASLD-related outcomes, including mortality.”
To this end, the present study examined the association between food environments and MASLD-related mortality across more than 2,195 US counties. County-level mortality data were obtained from the CDC WONDER database (2016-2020) and linked to food environment data from the US Department of Agriculture Food Environment Atlas using Federal Information Processing Standards (FIPS) codes. Food deserts were defined as low-income areas with limited access to grocery stores, while food swamps were characterized by a predominance of unhealthy food outlets relative to healthy ones.
Additional data on obesity, type 2 diabetes (T2D), and nine social determinants of health were obtained from CDC PLACES and other publicly available datasets. Counties were stratified into quartiles based on MASLD-related mortality rates. Population-weighted mixed-effects linear regression models were used to evaluate associations between food environment exposures and MASLD mortality, adjusting for region, rural-urban status, age, sex, race, insurance coverage, chronic disease prevalence, SNAP participation, and access to exercise facilities.
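The population weighting described above means each county's contribution to the regression scales with its population. A minimal sketch of that idea, using plain weighted least squares on simulated data (this omits the paper's random effects and covariates; all variable names and numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical counties

pop = rng.integers(5_000, 1_000_000, n).astype(float)   # county populations
food_desert_q4 = rng.integers(0, 2, n).astype(float)    # top-quartile food desert flag
food_swamp_q4 = rng.integers(0, 2, n).astype(float)     # top-quartile food swamp flag

# Simulated mortality rate with known effects of 1.5 and 1.2
mortality = 10 + 1.5 * food_desert_q4 + 1.2 * food_swamp_q4 + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), food_desert_q4, food_swamp_q4])
W = pop / pop.sum()  # population weights, normalized

# Weighted least squares: beta = (X' diag(W) X)^{-1} X' diag(W) y
XtW = X.T * W
beta = np.linalg.solve(XtW @ X, XtW @ mortality)
print(beta)  # ≈ [10, 1.5, 1.2], the simulated effects
```

Larger counties pull the fit harder, so an effect concentrated in populous counties dominates the estimate, which is the intent of population weighting.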
Counties with the worst food environments had significantly higher MASLD-related mortality, even after adjusting for clinical and sociodemographic factors. Compared with counties in the lowest quartile of MASLD mortality, those in the highest quartile had a greater proportion of food deserts (22.3% vs 14.9%; P < .001) and food swamps (73.1% vs 65.7%; P < .001). They also had a significantly higher prevalence of obesity (40.5% vs 32.5%), type 2 diabetes (15.8% vs 11.4%), and physical inactivity (33.7% vs 24.9%).
Demographically, counties with higher MASLD mortality had significantly larger proportions of Black and Hispanic residents, and were more likely to be rural and located in the South. These counties also had significantly lower median household incomes, higher poverty rates, fewer adults with a college education, lower access to exercise opportunities, greater SNAP participation, less broadband access, and more uninsured adults.
In multivariable regression models, both food deserts and food swamps remained independently associated with MASLD mortality. Counties in the highest quartile of food desert exposure had a 14.5% higher MASLD mortality rate, compared with the lowest quartile (P = .001), and those in the highest quartile for food swamp exposure had a 13.9% higher mortality rate (P = .005).
Type 2 diabetes, physical inactivity, and lack of health insurance were also independently associated with increased MASLD-related mortality.
“Implementing public health interventions that address the specific environmental factors of each county can help US policymakers promote access to healthy, culturally appropriate food choices at affordable prices and reduce the consumption of poor-quality food,” the investigators wrote. “Moreover, improving access to parks and exercise facilities can further enhance the impact of healthy nutrition. These strategies could help curb the growing epidemic of metabolic diseases, including MASLD and related mortality.”
This study was supported by King Faisal Specialist Hospital & Research Center, the Global NASH Council, Center for Outcomes Research in Liver Diseases, and the Beatty Liver and Obesity Research Fund, Inova Health System. The investigators disclosed no conflicts of interest.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Infrequent HDV Testing Raises Concern for Worse Liver Outcomes
—according to new findings.
The low testing rate suggests limited awareness of HDV-associated risks in patients with CHB and underscores the need for earlier testing and diagnosis, lead author Robert J. Wong, MD, of Stanford University School of Medicine, Stanford, California, and colleagues reported.
“Data among US populations are lacking to describe the epidemiology and long-term outcomes of patients with CHB and concurrent HDV infection,” the investigators wrote in Gastro Hep Advances (2025 Oct. doi: 10.1016/j.gastha.2024.10.015).
Prior studies have found that only 6% to 19% of patients with CHB get tested for HDV, and among those tested, the prevalence is relatively low—between 2% and 4.6%. Although relatively uncommon, HDV carries a substantial clinical and economic burden, Dr. Wong and colleagues noted, highlighting the importance of clinical awareness and accurate epidemiologic data.
The present study analyzed data from the Veterans Affairs (VA) Corporate Data Warehouse between 2010 and 2023. Adults with CHB were identified based on laboratory-confirmed markers and ICD-9/10 codes. HDV testing (anti-HDV antibody and HDV RNA) was assessed, and predictors of testing were evaluated using multivariable logistic regression.
To examine liver-related outcomes, patients who tested positive for HDV were propensity score–matched 1:2 with CHB patients who tested negative. Matching accounted for age, sex, race/ethnicity, HBeAg status, antiviral treatment, HCV and HIV coinfection, diabetes, and alcohol use. Patients with cirrhosis or hepatocellular carcinoma (HCC) at baseline were excluded. Incidence of cirrhosis, hepatic decompensation, and HCC was estimated using competing risks Nelson-Aalen methods.
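The Nelson-Aalen estimator mentioned above accumulates hazard as (events at time t) / (patients still at risk at t), summed over observed event times. A minimal sketch on toy follow-up data (the study's version also accounted for competing risks, which this omits; the data are illustrative, not the study's):

```python
import numpy as np

def nelson_aalen(times, events):
    """Nelson-Aalen cumulative hazard estimate.
    times: follow-up times; events: 1 = event observed (e.g., cirrhosis), 0 = censored.
    Returns a list of (event_time, cumulative_hazard) pairs."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    curve, cum = [], 0.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)                  # still at risk just before t
        d = np.sum((times == t) & (events == 1))      # events occurring at t
        cum += d / at_risk
        curve.append((t, cum))
    return curve

# Toy cohort: six patients, four events, two censored
H = nelson_aalen([2, 3, 3, 5, 8, 10], [1, 1, 0, 1, 0, 1])
print(H)
```

At each step the increment shrinks the at-risk denominator, so the cumulative hazard rises faster late in follow-up when few patients remain, which is why censoring must be handled explicitly rather than by dropping patients.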
Among 27,548 veterans with CHB, only 16.1% underwent HDV testing. Of those tested, 3.25% were HDV positive. Testing rates were higher among patients who were HBeAg positive, on antiviral therapy, or identified as Asian or Pacific Islander.
Conversely, testing was significantly less common among patients with high-risk alcohol use, past or current drug use, cirrhosis at diagnosis, or HCV coinfection. In contrast, HIV coinfection was associated with increased odds of being tested.
Among those tested, HDV positivity was more likely in patients with HCV coinfection, cirrhosis, or a history of drug use. On multivariable analysis, these factors were independent predictors of HDV positivity.
In the matched cohort of 71 HDV-positive patients and 140 HDV-negative controls, the incidence of cirrhosis was more than 3-fold higher in HDV-positive patients (4.39 vs 1.30 per 100,000 person-years; P < .01), and hepatic decompensation was over 5 times more common (2.18 vs 0.41 per 100,000 person-years; P = .01). There was also a nonsignificant trend toward increased HCC risk in the HDV group.
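As a quick check, the fold differences quoted above follow directly from the reported incidence rates:

```python
# Fold differences implied by the reported incidence rates (per 100,000 person-years)
cirrhosis_ratio = 4.39 / 1.30        # "more than 3-fold higher"
decompensation_ratio = 2.18 / 0.41   # "over 5 times more common"
print(round(cirrhosis_ratio, 2), round(decompensation_ratio, 2))  # → 3.38 5.32
```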
“These findings align with existing studies and confirm that among a predominantly non-Asian US cohort of CHB patients, presence of concurrent HDV is associated with more severe liver disease progression,” the investigators wrote. “These observations, taken together with the low rates of HDV testing overall and particularly among high-risk individuals, emphasizes the need for greater awareness and novel strategies on how to improve HDV testing and diagnosis, particularly given that novel HDV therapies are on the near horizon.”
The study was supported by Gilead. The investigators disclosed additional relationships with Exact Sciences, GSK, Novo Nordisk, and others.
Hepatitis D virus (HDV) is an RNA “sub-virus” that infects patients with co-existing hepatitis B virus (HBV) infections. HDV infection currently affects approximately 15-20 million people worldwide but is an orphan disease in the United States with fewer than 100,000 individuals infected today.
Those with HDV have a 70% lifetime risk of hepatocellular carcinoma (HCC), cirrhosis, liver failure, death, or liver transplant. Yet no treatment for HDV is currently approved by the Food and Drug Administration (FDA) in the US, and only one therapy holds full approval from the European Medicines Agency in the European Union.
Despite HDV severity and limited treatment options, screening for HDV remains severely inadequate, often limited to sequential testing of individuals at high risk. HDV screening would benefit from a revamped approach that automatically reflexes testing: when individuals diagnosed with HBV are positive for hepatitis B surface antigen (HBsAg+), the sample proceeds to total anti-HDV antibody testing and is then reflexed again to HDV-RNA polymerase chain reaction (PCR) quantitation. This is especially true in the Veterans Administration (VA)’s hospitals and clinics, where Wong and colleagues found very low rates of HDV testing among a national cohort of US Veterans with chronic HBV.
This study highlights the importance of timely HDV testing using reflex tools to improve diagnosis and HDV treatment, reducing long-term risks of liver-related morbidity and mortality.
Robert G. Gish, MD, AGAF, is principal at Robert G Gish Consultants LLC, clinical professor of medicine at Loma Linda University, Loma Linda, Calif., and medical director of the Hepatitis B Foundation. His complete list of disclosures can be found at www.robertgish.com/about.
FROM GASTRO HEP ADVANCES