Histamine Pathway a Target for Erythropoietic Protoporphyria?
An experimental study in zebrafish suggests that the decades-old, first-generation antihistamine chlorcyclizine, and possibly other antihistamines, may offer a strategy for treating erythropoietic protoporphyria (EPP)-associated liver disease by decreasing hepatic protoporphyrin IX (PP-IX) accumulation.
Currently, liver transplantation is the primary treatment for this rare, painful, and life-threatening genetic disease, which is caused by excessive PP-IX accumulation and affects about 4000 people in the United States.
The findings could eventually lead to a simpler treatment that prevents hepatic damage at a much earlier stage, according to researchers led by M. Bishr Omary, MD, PhD, a professor in the Center for Advanced Biotechnology and Medicine and the Robert Wood Johnson Medical School at Rutgers University in Piscataway, New Jersey.
Reporting in Cellular and Molecular Gastroenterology and Hepatology, the investigators found that chlorcyclizine reduced PP-IX levels. EPP is caused by mutations leading to deficiency of the enzyme ferrochelatase, which inserts iron into PP-IX to generate heme. The resulting condition is characterized by PP-IX accumulation, skin photosensitivity, cholestasis, and end-stage liver disease. “Despite available drugs that address photosensitivity, the treatment of EPP-related liver disease remains an unmet need,” Omary and colleagues wrote.
The Study
To trigger PP-IX overproduction and accumulation, the investigators administered delta-aminolevulinic acid (ALA) and deferoxamine (DFO) to zebrafish. These freshwater tropical fish share many physiological characteristics with humans and have been used to model human disease and develop drugs. Furthermore, these fish are transparent at the larval stage, allowing quantification and visualization of porphyrin, which is fluorescent.
The researchers screened some 2500 approved and bioactive compounds via high-throughput screening in ALA + DFO-treated zebrafish and identified the histamine H1-receptor blocker chlorcyclizine as a potent agent for lowering zebrafish liver PP-IX levels. The effect of chlorcyclizine was validated in porphyrin-loaded primary mouse hepatocytes, in transgenic mice, and in mice fed the porphyrinogenic compound 3,5-diethoxycarbonyl-1,4-dihydrocollidine.
Plasma and tissue PP-IX were measured by fluorescence; livers were analyzed by histology, immunoblotting, and quantitative polymerase chain reaction.
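As a rough illustration of how hits emerge from such a fluorescence-based screen (the authors' actual pipeline is not described here, so names, values, and the 50% threshold are assumptions), compounds can be ranked by the normalized reduction in liver PP-IX fluorescence relative to untreated ALA + DFO controls:

```python
# Hypothetical hit-selection step for a fluorescence-based screen.
# Each compound has replicate liver PP-IX fluorescence readings;
# readings, names, and threshold are illustrative, not from the study.
from statistics import mean

def percent_reduction(treated: list[float], control_mean: float) -> float:
    """Percent drop in mean PP-IX fluorescence vs untreated controls."""
    return 100.0 * (control_mean - mean(treated)) / control_mean

def select_hits(readings: dict[str, list[float]],
                control: list[float],
                threshold: float = 50.0) -> list[tuple[str, float]]:
    """Return compounds whose PP-IX signal drops by >= threshold percent."""
    control_mean = mean(control)
    scored = [(name, percent_reduction(vals, control_mean))
              for name, vals in readings.items()]
    return sorted([s for s in scored if s[1] >= threshold],
                  key=lambda s: s[1], reverse=True)

# A chlorcyclizine-like hit vs an inactive compound (made-up numbers)
hits = select_hits(
    {"chlorcyclizine": [310.0, 295.0, 330.0], "compound_x": [900.0, 880.0]},
    control=[1000.0, 960.0, 1040.0],
)
print(hits)  # -> [('chlorcyclizine', 68.83...)]; compound_x (~11%) is filtered out
```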
Chlorcyclizine-treated zebrafish larvae, as well as both types of mice, showed reduced hepatic PP-IX levels compared with controls. Histamine played an important role in PP-IX accumulation in porphyrin-stressed hepatocytes, and blocking its receptor notably decreased PP-IX levels.
Detailed analysis showed that chlorcyclizine appeared to work through multiple mechanisms, helping the liver clear toxic porphyrin buildup and reducing inflammation. It also decreased the presence of histamine-producing mast cells. The result was less liver injury, decreased porphyrin-triggered protein aggregation and oxidation, and increased clearance of PP-IX in stool.
Interestingly, in both mouse models, chlorcyclizine lowered PP-IX levels in liver, erythrocytes, and bone marrow in female but not male mice. This sex-specific effect appeared to be related to the greater speed at which male mice metabolize the drug, the authors explained in a news release. In rats, for example, chlorcyclizine metabolism is 8 times higher in male than in female livers.
The investigators plan to launch a clinical trial in EPP patients to evaluate the effectiveness of chlorcyclizine for both liver and skin involvement. A phase 2 trial is already underway testing the histamine H2-receptor blocker cimetidine for treating EPP skin manifestations. It is possible that the different antihistamines may act additively or synergistically.
This work was supported by National Institutes of Health (NIH) grants and the Henry and Mala Dorfman Family Professorship of Pediatric Hematology/Oncology.
Omary is a member of the NIH/National Institute of Diabetes and Digestive and Kidney Diseases Data and Safety Monitoring Board of the Porphyrias Consortium.
A provisional patent application has been submitted for the use of H1-receptor blockers, with or without H2-receptor blockers, to treat protoporphyrias associated with PP-IX accumulation.
Mutations in the ferrochelatase (FECH) gene cause erythropoietic protoporphyria (EPP). EPP is characterized biochemically by accumulation of protoporphyrin-IX (PP-IX) in the liver and bone marrow, and clinically by hepatic dysfunction, which progresses to advanced liver disease in 1%-4% of patients.
A recent study by Kuo and colleagues exemplifies a bench-to-bedside evolution comprising pharmacological screening, mechanistic dissection, and ultimately translation of this mechanism to human subjects to treat EPP. They utilized high-throughput compound screening in a zebrafish model to identify the antihistamine chlorcyclizine (CCZ) as a candidate EPP therapy. Chlorcyclizine lowered hepatocyte PP-IX in multiple EPP models by blocking peripheral histamine production and by inducing hepatocyte PP-IX efflux. The data represent advances in the realms of both clinical therapeutics and molecular pathophysiological discovery.
From a discovery standpoint, strategic compound screening using the LOPAC (Library of Pharmacologically Active Compounds) and Prestwick libraries offers at least two key advantages. First, these compounds have largely known targets; the known pharmacology of chlorcyclizine provided immediate clues to validate the mechanism rapidly in hepatic EPP, a relatively poorly understood disease. Second, screening libraries comprising FDA-approved drugs can minimize the lag time between discovery and translation to interventional trials in human subjects.
Beyond such strategic discovery considerations, perhaps more exciting is the therapeutic potential for antihistaminergic therapy to mitigate hepatic manifestations of EPP. Other porphyrias with hepatic complications have FDA-approved treatments, such as anti-ALAS1 siRNAs for acute hepatic porphyria (AHP). No such treatment currently exists for liver dysfunction in EPP, yet CCZ and other histamine-receptor blockers hold such promise. Indeed, the histamine H2-receptor blocker cimetidine is currently in an active phase 2 trial for EPP (NCT05020184).
Given the already widespread use of antihistamines to symptomatically treat cutaneous EPP, we may not be far from pivoting and deploying readily available histamine-receptor blockers like cimetidine to treat EPP liver manifestations as well. In light of the recent data by Kuo and colleagues, such an outcome should not be too far-FECHed.
Brian DeBosch, MD, PhD, is center director of the nutrition and molecular metabolism research program in the Division of Gastroenterology, Hepatology & Nutrition at Indiana University School of Medicine, Indianapolis. He declares no conflicts of interest.
From Cellular and Molecular Gastroenterology and Hepatology
IgG-Guided Elimination Diet Beats Sham Diet for IBS Pain
An irritable bowel syndrome (IBS) elimination diet guided by a novel IBS-specific immunoglobulin G (IgG) assay was superior to a sham diet for relieving abdominal pain, an 8-center, randomized, double-blind, controlled trial found.
While elimination diets can provide a personalized approach to dietary therapy, existing studies have had serious methodological issues, noted lead author Prashant Singh, MBBS, of the Division of Gastroenterology and Hepatology, Department of Internal Medicine, Michigan Medicine, Ann Arbor, Mich., and colleagues in Gastroenterology.
For example, previous studies on IgG-based diets used assays developed without determining IBS trigger foods or establishing a 95% confidence interval–based cutoff using a healthy control comparison group.
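The assay developers' exact derivation isn't detailed here, but one common way to set such a cutoff is the upper bound of a 95% reference interval computed from healthy-control IgG values. The sketch below is purely illustrative of that idea, with assumed, arbitrary-unit data:

```python
# Illustrative IgG positivity cutoff derived from healthy controls.
# Assumes approximately normal control values; 1.96 gives the upper
# bound of a two-sided 95% interval. A sketch of the general idea,
# not the assay developers' actual procedure.
from statistics import mean, stdev

def igg_cutoff(healthy_controls: list[float]) -> float:
    return mean(healthy_controls) + 1.96 * stdev(healthy_controls)

controls = [12.0, 15.5, 9.8, 14.1, 11.3, 13.0, 10.7, 12.9]
cutoff = igg_cutoff(controls)
print(round(cutoff, 1))  # ~16.1 (arbitrary units)
print(18.2 > cutoff)     # True -> this food would count as IgG-positive
```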
Study Details
From June 2018 to December 2021, 238 patients with IBS who tested positive for at least one food on an 18-food IgG enzyme-linked immunosorbent assay (ELISA) and who had an average daily abdominal pain intensity score of 3.0-7.5 on an 11-point scale during a 2-week run-in period were randomized to 8 weeks of an experimental antibody-guided diet or a sham diet. The primary outcome was at least a 30% decrease in abdominal pain intensity (API) for at least 2 of the last 4 weeks of treatment.
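For concreteness, that endpoint can be expressed as a small rule. The sketch below assumes weekly mean API scores and illustrative names; it mirrors the definition above rather than the trial's actual statistical code.

```python
# Responder check for the primary outcome as described in the text:
# >= 30% drop in weekly abdominal pain intensity (API), relative to
# the run-in baseline, in at least 2 of the final 4 treatment weeks.
# Weekly averaging and names are assumptions for illustration.

def is_responder(baseline_api: float, weekly_api: list[float],
                 required_weeks: int = 2) -> bool:
    last4 = weekly_api[-4:]  # final 4 treatment weeks
    improved = [w for w in last4
                if (baseline_api - w) / baseline_api >= 0.30]
    return len(improved) >= required_weeks

# Patient with baseline pain 6.0 whose last 4 weeks were 5.8, 4.0, 3.9, 4.1:
# three of those weeks show a >= 30% drop (<= 4.2), so the rule is met.
print(is_responder(6.0, [6.1, 5.9, 5.8, 4.0, 3.9, 4.1]))  # True
```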
The overall study population had a mean age of about 40 years, and more than three-quarters were female. The 3 IBS types – constipation-predominant (IBS-C), diarrhea-predominant (IBS-D), and mixed-bowel-habit (IBS-M) – each accounted for about a third of patients in both arms.
The experimental diet eliminated foods based on a positive ELISA result. Its sham counterpart had the same number of foods removed as the number of positive-testing food sensitivities, but the foods eliminated in the sham diet had tested negative on the IgG assay.
Participants reported daily abdominal pain intensity, bloating, and stool consistency and frequency, as well as dietary compliance and daily medication use.
Of the 238 randomized adults, 223 were included in the modified intention-to-treat analysis. A significantly greater proportion of subjects in the experimental group than in the sham group met the primary outcome (59.6% vs 42.1%; P = .02). “This highlights the potential effectiveness of a personalized elimination diet based on a novel IBS-specific IgG assay,” the authors wrote.
Symptom improvement between arms began to separate out at around 2 weeks, suggesting the effect of the experimental diet was relatively rapid in onset, and continued for at least 8 weeks. The durability of response, however, needs to be assessed in future studies “and it is unclear if there is a role for repeat IgG testing to monitor treatment response,” the authors wrote.
Subgroup analysis revealed that a higher proportion of those with IBS-C and IBS-M in the experimental diet group met the primary endpoint vs the sham group: 67.1% vs 35.8% and 66% vs 29.5%, respectively.
Interestingly, more patients in the experimental arm were noncompliant with their diet. “It is possible that subjects found the experimental diet more difficult to comply with compared with the sham diet or that because the experimental diet was more likely to improve symptoms, dietary indiscretion may have been more common in this group (a phenomenon seen with other elimination diets such as gluten-free diet in celiac disease),” the authors wrote.
There were 3 adverse events in the experimental arm vs 8 in the sham arm, including 2 urinary tract infections; all were deemed unrelated to either regimen.
The authors called for a larger, adequately powered study to assess the efficacy of an elimination diet based on this novel immunoglobulin G assay in patients with IBS-C and IBS-M. Future studies should perform detailed adherence assessments using food diaries.
“Mechanisms of how immunoglobulin G-antibody response to food antigen generates symptoms in irritable bowel syndrome are not well understood. Delineating this might provide new insights into food-related irritable bowel syndrome pathophysiology,” they concluded.
This study was funded by Biomerica Inc.
Symptoms in most people with irritable bowel syndrome (IBS) are perceived to be closely linked to diet. The low FODMAP diet has been pivotal for the treatment of IBS, and a range of other diet approaches are now on the research horizon.
Whilst IgE-mediated allergy is relatively rare, there has been research suggesting a role of IgG-mediated food sensitivity in causing symptoms in IBS, although the role of IgG testing and dietary elimination has been controversial. This study from Singh and colleagues suggests an IgG-based elimination diet could improve abdominal pain and global symptoms in two thirds of people with Rome IV IBS. Critically, the study is one of the largest so far and provides the most robust and detailed description of the trial diets to date.
The potential of a new diet approach is extremely appealing, especially as the low FODMAP diet is not universally effective. However, there is still work to be done to transition the IgG-based elimination diet into guidelines and routine practice. Notably, some common foods restricted in IgG-based elimination diets are also high in FODMAPs, leaving questions about the true driver of symptom benefit. Should convincing mechanistic studies and further RCT data validate these findings, this could represent a major step forward for personalised nutrition in IBS.
Heidi Staudacher, PhD, is associate professor in the School of Translational Medicine, Monash University, Melbourne, Australia. She declared no conflicts of interest.
FROM GASTROENTEROLOGY
Gastric Cancer Prevention: New AGA Update Reflects Latest High-Risk Screening and Surveillance Advice
Clinicians can help reduce gastric cancer incidence and mortality in high-risk groups through endoscopic screening and surveillance of precancerous conditions, such as gastric intestinal metaplasia (GIM), according to a new clinical practice update from AGA.
The update supports additional gastric guidance published so far in 2025, including a clinical guideline on the diagnosis and management of gastric premalignant conditions (GPMC) from the American College of Gastroenterology (ACG) and upper GI endoscopy quality indicators from ACG and the American Society for Gastrointestinal Endoscopy (ASGE).
“The synergy of these three publications coming out at the same time helps us to finally establish surveillance of high-risk gastric conditions in practice, as we do in the colon and esophagus,” said Douglas R. Morgan, MD, professor of medicine in gastroenterology and hepatology and director of Global Health programs in gastroenterology at the University of Alabama at Birmingham.
Morgan, who wasn’t involved with the AGA update, served as lead author for the ACG guideline and co-author of the ACG-ASGE quality indicators. He also co-authored the 2024 ACG clinical guideline on treating Helicobacter pylori infection, which has implications for gastric cancer.
“The AGA and ACG updates provide detail, while the QI document is an enforcer with medical, legal, and reimbursement implications,” he said. “We have an alignment of the stars with this overdue move toward concrete surveillance for high-risk lesions in the stomach.”
The clinical practice update was published in Gastroenterology.
Gastric Cancer Screening
The top ways to reduce gastric cancer mortality include primary prevention, particularly by eradicating H pylori, and secondary prevention through screening and surveillance, the authors wrote.
High-risk groups in the United States should be considered for gastric cancer screening, including first-generation immigrants from high-incidence regions and potentially other non-White racial and ethnic groups, those with a family history of gastric cancer in a first-degree relative, and those with certain hereditary GI polyposis or hereditary cancer syndromes.
Endoscopy remains the best test for screening or surveillance of high-risk groups, the authors wrote, since it allows for direct visualization to endoscopically stage the mucosa, identify any concerning areas of neoplasia, and enable biopsies. Both endoscopic and histologic staging are key for risk stratification and surveillance decisions.
In particular, clinicians should use a high-definition white-light endoscopy system with image enhancement, gastric mucosal cleansing, and insufflation to optimally visualize the mucosa. They should also allow adequate visual inspection time, perform photodocumentation, and follow a systematic biopsy protocol for mucosal staging, where appropriate.
In addition, clinicians should consider H pylori eradication an essential adjunct to endoscopic screening, the authors wrote. Opportunistic screening for H pylori should be considered in high-risk groups, and familial-based testing should be considered among adult household members of patients who test positive for H pylori.
Endoscopic Biopsy and Diagnosis
In patients with suspected gastric atrophy — with or without GIM — gastric biopsies should be obtained with a systematic approach, the authors wrote. Clinicians should take a minimum of five biopsies, sampling from the antrum/incisura and corpus.
Endoscopists should work with their pathologists on consistent documentation of histologic risk-stratification parameters when atrophic gastritis is diagnosed, the authors wrote. To inform clinical decision-making, this should include documentation of the presence or absence of H pylori infection, severity of atrophy or metaplasia, and histologic subtyping of GIM.
Although GIM and dysplasia are endoscopically detectable, these findings often go undiagnosed when endoscopists aren’t familiar with the characteristic visual features, the authors wrote. More training is needed, especially in the US, and although artificial intelligence tools appear promising for detecting early gastric neoplasia, data remain too preliminary to recommend routine use, the authors added.
Since indefinite and low-grade dysplasia can be difficult to identify by endoscopy and to diagnose accurately on histopathology, all dysplasia should be confirmed by an experienced gastrointestinal pathologist, the authors wrote. Clinicians should refer patients with visible or nonvisible dysplasia to an endoscopist or center with expertise in gastric neoplasia.
Endoscopic Management and Surveillance
If an index screening endoscopy doesn’t identify atrophy, GIM, or neoplasia, ongoing screening should be based on a patient’s risk factors and preferences. If the patient has a family history or multiple risk factors, ongoing screening should be considered. However, the optimal screening intervals in these scenarios aren’t well-defined.
Patients with confirmed gastric atrophy should undergo risk stratification, the authors wrote. Those with severe atrophic gastritis or multifocal/incomplete GIM would likely benefit from endoscopic surveillance, particularly if they have other risk factors such as family history. Surveillance should be considered every 3 years, though shorter intervals may be advisable for those with multiple risk factors such as severe GIM.
Patients with high-grade dysplasia or early gastric cancer should undergo endoscopic submucosal dissection (ESD), with the goal of en bloc, R0 resection to enable accurate pathologic staging and the intent to cure. Eradicating active H pylori infection is essential — but shouldn’t delay endoscopic intervention, the authors wrote.
In addition, patients with a history of successfully resected gastric dysplasia or cancer should undergo endoscopic surveillance. Although post-ESD surveillance intervals have been suggested in other recent AGA clinical practice updates, additional data are needed, particularly for US recommendations, the authors wrote.
Although type 1 gastric carcinoids in patients with atrophic gastritis are typically indolent, especially if less than 1 cm, endoscopists may consider resecting them and should resect lesions between 1 and 2 cm. Patients with lesions over 2 cm should undergo cross-sectional imaging and be referred for surgical resection, given the risk for metastasis.
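Expressed schematically, the size-based triage above reduces to a simple threshold rule (sizes in cm; this is a reading aid for the recommendation, not a clinical tool):

```python
# Schematic triage of type 1 gastric carcinoids by size, mirroring the
# recommendation in the text. Thresholds in cm; purely illustrative.
def carcinoid_plan(size_cm: float) -> str:
    if size_cm < 1.0:
        return "typically indolent; resection may be considered"
    if size_cm <= 2.0:
        return "endoscopic resection recommended"
    return "cross-sectional imaging + surgical referral (metastasis risk)"

for size in (0.6, 1.5, 2.4):
    print(size, "->", carcinoid_plan(size))
```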
Patient-Centered Approach
The guideline authors suggested approaching screening and surveillance on a patient-by-patient basis. For instance, only patients who are fit for endoscopic or potentially surgical treatment should undergo gastric cancer screening and continued surveillance of GPMC, they wrote. If a person is no longer fit for endoscopic or surgical treatment, whether because of limited life expectancy or other comorbidities, screening should be stopped.
In addition, to achieve health equity, clinicians should take a personalized approach to assess a patient’s risk for gastric cancer and determine whether to pursue screening and surveillance, the authors wrote. Modifiable risk factors — such as tobacco use, high-salt and processed food diets, and lack of health care — should also be addressed, since most of these risk factors disproportionately affect high-risk patients and represent healthcare disparities, they added.
“This update provides clinicians with a framework for understanding the natural history and epidemiology of gastric polyps, as well as guidance on best practices for the endoscopic detection and classification of gastric polyps, best practices for the endoscopic resection of gastric polyps, and best practices for endoscopic surveillance following resection,” said Hashem El-Serag, MD, professor and chair of medicine at the Baylor College of Medicine and director of the Texas Medical Center Digestive Diseases Center in Houston.
El-Serag, who wasn’t involved with the clinical practice update, has researched and published on consensus around the diagnosis and management of GIM.
“Stomach polyps are commonly found during routine endoscopic procedures. They are mostly asymptomatic and incidental, and therefore, clinicians may not be prepared ahead of time on how to deal with them,” he said. “The appropriate management requires proper identification and sampling of the polyp features and the uninvolved gastric mucosa, as well as a clear understanding of the risk factors and prognosis. Recent changes in the epidemiology and endoscopic management of gastric polyps makes this update timely and important.”
The update received no particular funding. The authors disclosed receiving grant support, having consultant relationships with, and serving in advisory roles for numerous pharmaceutical, biomedical, and biotechnology firms. Morgan and El-Serag reported having no relevant disclosures.
A version of this article appeared on Medscape.com.
Clinicians can help reduce gastric cancer incidence and mortality in high-risk groups through endoscopic screening and surveillance of precancerous conditions, such as gastric intestinal metaplasia (GIM), according to a new clinical practice update from AGA.
The update supports additional gastric guidance published so far in 2025, including a clinical guideline on the diagnosis and management of gastric premalignant conditions (GPMC) from the American College of Gastroenterology (ACG) and upper GI endoscopy quality indicators from ACG and the American Society for Gastrointestinal Endoscopy (ASGE).
“The synergy of these three publications coming out at the same time helps us to finally establish surveillance of high-risk gastric conditions in practice, as we do in the colon and esophagus,” said Douglas R. Morgan, MD, professor of medicine in gastroenterology and hepatology and director of Global Health programs in gastroenterology at the University of Alabama at Birmingham.
Morgan, who wasn’t involved with the AGA update, served as lead author for the ACG guideline and co-author of the ACG-ASGE quality indicators. He also co-authored the 2024 ACG clinical guideline on treating Helicobacter pylori infection, which has implications for gastric cancer.
“The AGA and ACG updates provide detail, while the QI document is an enforcer with medical, legal, and reimbursement implications,” he said. “We have an alignment of the stars with this overdue move toward concrete surveillance for high-risk lesions in the stomach.”
The clinical practice update was published in Gastroenterology.
Gastric Cancer Screening
, the authors wrote. The top ways to reduce mortality include primary prevention, particularly by eradicating H pylori, and secondary prevention through screening and surveillance.
High-risk groups in the United States should be considered for gastric cancer screening, including first-generation immigrants from high-incidence regions and potentially other non-White racial and ethnic groups, those with a family history of gastric cancer in a first-degree relative, and those with certain hereditary GI polyposis or hereditary cancer syndromes.
Endoscopy remains the best test for screening or surveillance of high-risk groups, the authors wrote, since it allows for direct visualization to endoscopically stage the mucosa, identify any concerning areas of neoplasia, and enable biopsies. Both endoscopic and histologic staging are key for risk stratification and surveillance decisions.
In particular, clinicians should use a high-definition white light endoscopy system with image enhancement, gastric mucosal cleansing, and insufflation to see the mucosa. As part of this, clinicians should allow for adequate visual inspection time, photodocumentation, and systematic biopsy protocol for mucosal staging, where appropriate.
As part of this, clinicians should consider H pylori eradication as an essential adjunct to endoscopic screening, the authors wrote. Opportunistic screening for H pylori should be considered in high-risk groups, and familial-based testing should be considered among adult household members of patients who test positive for H pylori.
Endoscopic Biopsy and Diagnosis
In patients with suspected gastric atrophy — with or without GIM — gastric biopsies should be obtained with a systematic approach, the authors wrote. Clinicians should take a minimum of five biopsies, sampling from the antrum/incisura and corpus.
Endoscopists should work with their pathologists on consistent documentation of histologic risk-stratification parameters when atrophic gastritis is diagnosed, the authors wrote. To inform clinical decision-making, this should include documentation of the presence or absence of H pylori infection, severity of atrophy or metaplasia, and histologic subtyping of GIM.
Although GIM and dysplasia are endoscopically detectable, these findings often go undiagnosed when endoscopists aren’t familiar with the characteristic visual features, the authors wrote. More training is needed, especially in the US, and although artificial intelligence tools appear promising for detecting early gastric neoplasia, data remain too preliminary to recommend routine use, the authors added.
Since indefinite and low-grade dysplasia can be difficult to identify by endoscopy and accurately diagnosis on histopathology, all dysplasia should be confirmed by an experienced gastrointestinal pathologist, the authors wrote. Clinicians should refer patients with visible or nonvisible dysplasia to an endoscopist or center with expertise in gastric neoplasia.
Endoscopic Management and Surveillance
If an index screening endoscopy doesn’t identify atrophy, GIM, or neoplasia, ongoing screening should be based on a patient’s risk factors and preferences. If the patient has a family history or multiple risk factors, ongoing screening should be considered. However, the optimal screening intervals in these scenarios aren’t well-defined.
Patients with confirmed gastric atrophy should undergo risk stratification, the authors wrote. Those with severe atrophic gastritis or multifocal/incomplete GIM would likely benefit from endoscopic surveillance, particularly if they have other risk factors such as family history. Surveillance should be considered every 3 years, though shorter intervals may be advisable for those with multiple risk factors such as severe GIM.
Patients with high-grade dysplasia or early gastric cancer should undergo endoscopic submucosal dissection (ESD), with the goal of en bloc, R0 resection to enable accurate pathologic staging and the intent to cure. Eradicating active H pylori infection is essential — but shouldn’t delay endoscopic intervention, the authors wrote.
In addition, patients with a history of successfully resected gastric dysplasia or cancer should undergo endoscopic surveillance. Although post-ESD surveillance intervals have been suggested in other recent AGA clinical practice updates, additional data are needed, particularly for US recommendations, the authors wrote.
Although type 1 gastric carcinoids in patients with atrophic gastritis are typically indolent, especially if less than 1 cm, endoscopists may consider resecting them and should resect lesions between 1and 2 cm. Patients with lesions over 2 cm should undergo cross-sectional imaging and be referred for surgical resection, given the risk for metastasis.
Patient-Centered Approach
The guideline authors suggested thinking about screening and surveillance on a patient-level basis. For instance, only those who are fit for endoscopic or potentially surgical treatment should be screened for gastric cancer and continued surveillance of GPMC, they wrote. If a person is no longer fit for endoscopic or surgical treatment, whether due to life expectancy or other comorbidities, then screening should be stopped.
In addition, to achieve health equity, clinicians should take a personalized approach to assess a patient’s risk for gastric cancer and determine whether to pursue screening and surveillance, the authors wrote. Modifiable risk factors — such as tobacco use, high-salt and processed food diets, and lack of health care — should also be addressed, since most of these risk factors disproportionately affect high-risk patients and represent healthcare disparities, they added.
“This update provides clinicians with a framework for understanding the natural history and epidemiology of gastric polyps, as well as guidance on best practices for the endoscopic detection and classification of gastric polyps, best practices for the endoscopic resection of gastric polyps, and best practices for endoscopic surveillance following resection,” said Hashem El-Serag, MD, professor and chair of medicine at the Baylor College of Medicine and director of the Texas Medical Center Digestive Diseases Center in Houston.
El-Serag, who wasn’t involved with the clinical practice update, has researched and published on consensus around the diagnosis and management of GIM.
“Stomach polyps are commonly found during routine endoscopic procedures. They are mostly asymptomatic and incidental, and therefore, clinicians may not be prepared ahead of time on how to deal with them,” he said. “The appropriate management requires proper identification and sampling of the polyp features and the uninvolved gastric mucosa, as well as a clear understanding of the risk factors and prognosis. Recent changes in the epidemiology and endoscopic management of gastric polyps makes this update timely and important.”
The update received no particular funding. The authors disclosed receiving grant support, having consultant relationships with, and serving in advisory roles for numerous pharmaceutical, biomedical, and biotechnology firms. Morgan and El-Serag reported having no relevant disclosures.
A version of this article appeared on Medscape.com.
Clinicians can help reduce gastric cancer incidence and mortality in high-risk groups through endoscopic screening and surveillance of precancerous conditions, such as gastric intestinal metaplasia (GIM), according to a new clinical practice update from AGA.
The update supports additional gastric guidance published so far in 2025, including a clinical guideline on the diagnosis and management of gastric premalignant conditions (GPMC) from the American College of Gastroenterology (ACG) and upper GI endoscopy quality indicators from ACG and the American Society for Gastrointestinal Endoscopy (ASGE).
“The synergy of these three publications coming out at the same time helps us to finally establish surveillance of high-risk gastric conditions in practice, as we do in the colon and esophagus,” said Douglas R. Morgan, MD, professor of medicine in gastroenterology and hepatology and director of Global Health programs in gastroenterology at the University of Alabama at Birmingham.
Morgan, who wasn’t involved with the AGA update, served as lead author for the ACG guideline and co-author of the ACG-ASGE quality indicators. He also co-authored the 2024 ACG clinical guideline on treating Helicobacter pylori infection, which has implications for gastric cancer.
“The AGA and ACG updates provide detail, while the QI document is an enforcer with medical, legal, and reimbursement implications,” he said. “We have an alignment of the stars with this overdue move toward concrete surveillance for high-risk lesions in the stomach.”
The clinical practice update was published in Gastroenterology.
Gastric Cancer Screening
The top ways to reduce gastric cancer mortality include primary prevention, particularly by eradicating H pylori, and secondary prevention through screening and surveillance, the authors wrote.
High-risk groups in the United States should be considered for gastric cancer screening, including first-generation immigrants from high-incidence regions and potentially other non-White racial and ethnic groups, those with a family history of gastric cancer in a first-degree relative, and those with certain hereditary GI polyposis or hereditary cancer syndromes.
Endoscopy remains the best test for screening or surveillance of high-risk groups, the authors wrote, since it allows for direct visualization to endoscopically stage the mucosa, identify any concerning areas of neoplasia, and enable biopsies. Both endoscopic and histologic staging are key for risk stratification and surveillance decisions.
In particular, clinicians should use a high-definition white light endoscopy system with image enhancement, gastric mucosal cleansing, and insufflation to see the mucosa. As part of this, clinicians should allow for adequate visual inspection time, photodocumentation, and systematic biopsy protocol for mucosal staging, where appropriate.
Clinicians should also consider H pylori eradication an essential adjunct to endoscopic screening, the authors wrote. Opportunistic screening for H pylori should be considered in high-risk groups, and familial-based testing should be considered among adult household members of patients who test positive for H pylori.
Endoscopic Biopsy and Diagnosis
In patients with suspected gastric atrophy — with or without GIM — gastric biopsies should be obtained with a systematic approach, the authors wrote. Clinicians should take a minimum of five biopsies, sampling from the antrum/incisura and corpus.
Endoscopists should work with their pathologists on consistent documentation of histologic risk-stratification parameters when atrophic gastritis is diagnosed, the authors wrote. To inform clinical decision-making, this should include documentation of the presence or absence of H pylori infection, severity of atrophy or metaplasia, and histologic subtyping of GIM.
Although GIM and dysplasia are endoscopically detectable, these findings often go undiagnosed when endoscopists aren’t familiar with the characteristic visual features, the authors wrote. More training is needed, especially in the US, and although artificial intelligence tools appear promising for detecting early gastric neoplasia, data remain too preliminary to recommend routine use, the authors added.
Since indefinite and low-grade dysplasia can be difficult to identify on endoscopy and to diagnose accurately on histopathology, all dysplasia should be confirmed by an experienced gastrointestinal pathologist, the authors wrote. Clinicians should refer patients with visible or nonvisible dysplasia to an endoscopist or center with expertise in gastric neoplasia.
Endoscopic Management and Surveillance
If an index screening endoscopy doesn’t identify atrophy, GIM, or neoplasia, ongoing screening should be based on a patient’s risk factors and preferences. If the patient has a family history or multiple risk factors, ongoing screening should be considered. However, the optimal screening intervals in these scenarios aren’t well-defined.
Patients with confirmed gastric atrophy should undergo risk stratification, the authors wrote. Those with severe atrophic gastritis or multifocal/incomplete GIM would likely benefit from endoscopic surveillance, particularly if they have other risk factors such as family history. Surveillance should be considered every 3 years, though shorter intervals may be advisable for those with multiple risk factors such as severe GIM.
Patients with high-grade dysplasia or early gastric cancer should undergo endoscopic submucosal dissection (ESD), with the goal of en bloc, R0 resection to enable accurate pathologic staging and the intent to cure. Eradicating active H pylori infection is essential — but shouldn’t delay endoscopic intervention, the authors wrote.
In addition, patients with a history of successfully resected gastric dysplasia or cancer should undergo endoscopic surveillance. Although post-ESD surveillance intervals have been suggested in other recent AGA clinical practice updates, additional data are needed, particularly for US recommendations, the authors wrote.
Although type 1 gastric carcinoids in patients with atrophic gastritis are typically indolent, especially if less than 1 cm, endoscopists may consider resecting them and should resect lesions between 1 and 2 cm. Patients with lesions over 2 cm should undergo cross-sectional imaging and be referred for surgical resection, given the risk for metastasis.
Patient-Centered Approach
The guideline authors suggested thinking about screening and surveillance on a patient-level basis. For instance, only those who are fit for endoscopic or potentially surgical treatment should undergo gastric cancer screening and continued surveillance of GPMC, they wrote. If a person is no longer fit for endoscopic or surgical treatment, whether due to life expectancy or other comorbidities, then screening should be stopped.
In addition, to achieve health equity, clinicians should take a personalized approach to assess a patient’s risk for gastric cancer and determine whether to pursue screening and surveillance, the authors wrote. Modifiable risk factors — such as tobacco use, high-salt and processed food diets, and lack of health care — should also be addressed, since most of these risk factors disproportionately affect high-risk patients and represent healthcare disparities, they added.
“This update provides clinicians with a framework for understanding the natural history and epidemiology of gastric polyps, as well as guidance on best practices for the endoscopic detection and classification of gastric polyps, best practices for the endoscopic resection of gastric polyps, and best practices for endoscopic surveillance following resection,” said Hashem El-Serag, MD, professor and chair of medicine at the Baylor College of Medicine and director of the Texas Medical Center Digestive Diseases Center in Houston.
El-Serag, who wasn’t involved with the clinical practice update, has researched and published on consensus around the diagnosis and management of GIM.
“Stomach polyps are commonly found during routine endoscopic procedures. They are mostly asymptomatic and incidental, and therefore, clinicians may not be prepared ahead of time on how to deal with them,” he said. “The appropriate management requires proper identification and sampling of the polyp features and the uninvolved gastric mucosa, as well as a clear understanding of the risk factors and prognosis. Recent changes in the epidemiology and endoscopic management of gastric polyps make this update timely and important.”
The update received no particular funding. The authors disclosed receiving grant support, having consultant relationships with, and serving in advisory roles for numerous pharmaceutical, biomedical, and biotechnology firms. Morgan and El-Serag reported having no relevant disclosures.
A version of this article appeared on Medscape.com.
FROM GASTROENTEROLOGY
Computer-Aided Colonoscopy Not Ready for Prime Time: AGA Clinical Practice Guideline
In a new clinical practice guideline, AGA has declined to recommend for or against the use of computer-aided detection (CADe)-assisted colonoscopy in screening for colorectal cancer (CRC), the third most common cause of cancer mortality in the United States. The systematic data review is a collaboration between AGA and The BMJ’s MAGIC Rapid Recommendations. The BMJ issued a separate recommendation against CADe shortly after the AGA guideline was published.
The guideline, led by Shahnaz S. Sultan, MD, MHSc, AGAF, of the Division of Gastroenterology, Hepatology, and Nutrition at the University of Minnesota, Minneapolis, and recently published in Gastroenterology, found only very low-certainty evidence (by GRADE criteria) for several critical long-term outcomes, both desirable and undesirable. These included the following: 11 fewer CRCs per 10,000 individuals and two fewer CRC deaths per 10,000 individuals, an increased burden of more intensive surveillance colonoscopies (635 more per 10,000 individuals), and cost and resource implications.
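Taken at face value, those modeled point estimates imply a steep trade-off between extra procedures and cancers averted. A quick illustrative calculation in Python, using only the very-low-certainty numbers above:

```python
# Trade-off per 10,000 screened individuals, from the modeled point
# estimates quoted above (very low certainty; illustrative only).
crc_averted = 11
crc_deaths_averted = 2
extra_surveillance = 635

print(extra_surveillance / crc_averted)         # ~58 extra colonoscopies per CRC averted
print(extra_surveillance / crc_deaths_averted)  # ~318 per CRC death averted
```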
This technology did, however, yield an 8% (95% CI, 6-10) absolute increase in the adenoma detection rate (ADR) and a 2% (95% CI, 0-4) increase in the detection rate of advanced adenomas and/or sessile serrated lesions. “How this translates into a reduction in CRC incidence or death is where we were uncertain,” Sultan said. “Our best effort at trying to translate the ADR and other endoscopy outcomes to CRC incidence and CRC death relied on the modeling study, which included a lot of assumptions, which also contributed to our overall lower certainty.”
The systematic review and meta-analysis included 41 randomized controlled trials with 32,108 participants who underwent CADe-assisted colonoscopy. This technology was associated with a higher polyp detection rate than standard colonoscopy: 56.1% vs 47.9% (relative risk [RR], 1.22; 95% CI, 1.15-1.28). It also had a higher ADR: 44.8% vs 37.4% (RR, 1.22; 95% CI, 1.16-1.29).
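Readers can sanity-check the detection-rate arithmetic directly. Note that the pooled RR of 1.22 is a meta-analytic estimate that weights individual trials, so it differs slightly from the crude ratio of the aggregate rates; a minimal Python sketch:

```python
# Back-of-the-envelope check of the reported adenoma detection rates.
# The pooled RR (1.22) weights each trial, so the crude ratio of the
# aggregate rates will not match it exactly.
cade_adr, standard_adr = 0.448, 0.374

absolute_increase = cade_adr - standard_adr
crude_ratio = cade_adr / standard_adr

print(f"Absolute ADR increase: {absolute_increase:.1%}")  # ~7.4%, reported as ~8%
print(f"Crude rate ratio: {crude_ratio:.2f}")             # ~1.20 vs pooled RR 1.22
```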
But although CADe-assisted colonoscopy may increase ADR, it carries a risk for overdiagnosis, as most polyps detected during colonoscopy are diminutive (< 5 mm) and of low malignant potential, the panel noted. Approximately 25% of lesions are missed at colonoscopy. More than 15 million colonoscopies are performed annually in the United States, but studies have demonstrated variable quality of colonoscopies across key quality indicators.
“Artificial intelligence [AI] is revolutionizing medicine and healthcare in the field of GI [gastroenterology], and CADe in colonoscopy has been brought to commercialization,” Sultan told GI & Hepatology News. “Unlike many areas of endoscopic research where we often have a finite number of clinical trial data, CADe-assisted colonoscopy intervention has been studied in over 44 randomized controlled trials and numerous nonrandomized, real-world studies. The question of whether or not to adopt this intervention at a health system or practice level is an important question that was prioritized to be addressed as guidance was needed.”
Commenting on the guideline but not involved in its formulation, Larry S. Kim, MD, MBA, AGAF, a gastroenterologist at South Denver Gastroenterology in Denver, Colorado, said his practice group has used the GI Genius AI system in its affiliated hospitals but has so far chosen not to implement the technology at its endoscopy centers. “At the hospital, our physicians have the ability to utilize the system for select patients or not at all,” he told GI & Hepatology News.
The fact that The BMJ reached a different conclusion based on the same data, evidence-grading system, and microsimulation, Kim added, “highlights the point that when evidence for benefit is uncertain, underlying values are critical.” In declining to make a recommendation, the AGA panel balanced the benefit of improved detection of potentially precancerous adenomas vs increased resource utilization in the face of unclear benefit. “With different priorities, other bodies could reasonably decide to recommend either for or against CADe.”
The Future
According to Sultan, gastroenterologists need a better understanding of patient values and preferences and the value placed on increased adenoma detection, which may also lead to more lifetime colonoscopies without reducing the risk for CRC. “We need better intermediate- and long-term data on the impact of adenoma detection on interval cancers and CRC incidence,” she said. “We need data on detection of polyps that are more clinically significant such as those 6-10 mm in size, as well as serrated sessile lesions. We also need to understand at the population or health system level what the impact is on resources, cost, and access.”
Ultimately, the living guideline underscores the trade-off between desirable and undesirable effects and the limits of the current evidence to support a recommendation, while noting that CADe, as an iterative AI application, still needs further validation and better training to improve.
With the anticipated improvement in software accuracy as AI machine learning reads increasing numbers of images, Sultan added, “the next version of the software may perform better, especially for polyps that are more clinically significant or for flat sessile serrated polyps, which are harder to detect. We plan to revisit the question in the next year or two and potentially revise the guideline.”
These guidelines were fully funded by the AGA Institute with no funding from any outside agency or industry.
Sultan is supported by the US Food and Drug Administration. Co-authors Shazia Mehmood Siddique, Dennis L. Shung, and Benjamin Lebwohl are supported by grants from the National Institute of Diabetes and Digestive and Kidney Diseases. Theodore R. Levin is supported by the Permanente Medical Group Delivery Science and Applied Research Program. Cesare Hassan is a consultant for Fujifilm and Olympus. Peter S. Liang reported doing research work for Freenome and advisory board work for Guardant Health and Natera.
Kim is the AGA president-elect. He disclosed no competing interests relevant to his comments.
A version of this article appeared on Medscape.com.
FROM GASTROENTEROLOGY
Elemental Diet Eases Symptoms in Microbiome Gastro Disorders
A palatable elemental diet (PED) relieved symptoms, normalized breath tests, and altered the stool microbiome in patients with small intestinal bacterial overgrowth and/or intestinal methanogen overgrowth, according to a new study.
“Elemental diets have long shown promise for treating gastrointestinal disorders like Crohn’s disease, eosinophilic esophagitis, SIBO (small intestinal bacterial overgrowth), and IMO (intestinal methanogen overgrowth), but poor palatability has limited their use,” lead author Ali Rezaie, MD, medical director of the Gastrointestinal (GI) Motility Program and director of Bioinformatics at Cedars-Sinai Medical Center, Los Angeles, told GI & Hepatology News.
Elemental diets are specialized formulas tailored to meet an individual’s specific nutritional needs and daily requirements for vitamins, minerals, fat, free amino acids, and carbohydrates.
In SIBO and IMO specifically, only about half the patients respond to antibiotics, and many require repeat treatments, which underscores the need for effective nonantibiotic alternatives, said Rezaie. “This is the first prospective trial using a PED, aiming to make this approach both viable and accessible for patients,” he noted.
Assessing a Novel Diet in IMO and SIBO
In the study, which was recently published in Clinical Gastroenterology and Hepatology, Rezaie and colleagues enrolled 30 adults with IMO (40%), SIBO (20%), or both (40%). The mean participant age was 45 years, and 63% were women.
All participants completed 2 weeks of a PED, transitioned to 2-3 days of a bland diet, and then resumed their regular diets for 2 weeks.
The diet consisted of multiple 300-calorie packets, adjusted for individual caloric needs. Participants could consume additional packets for hunger but were prohibited from eating other foods. There was no restriction on water intake.
The primary endpoint was changes in stool microbiome after the PED and reintroduction of regular food. Secondary endpoints included lactose breath test normalization to determine bacterial overgrowth in the gut, symptom response, and adverse events.
Researchers collected 29 stool samples at baseline, 27 post-PED, and 27 at study conclusion (2 weeks post-diet).
Key Outcomes
Although the stool samples’ alpha diversity decreased after the PED, the difference was not statistically significant at the end of the study. However, 30 bacterial families showed significant differences in relative abundance post-PED.
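Alpha diversity summarizes how many taxa a single sample contains and how evenly they are distributed. The article does not name the specific index used; as an illustration, the Shannon index, one common alpha-diversity measure, can be computed from per-taxon counts. A minimal Python sketch, not the study's code:

```python
import math

def shannon_index(counts):
    """Shannon alpha diversity H = -sum(p_i * ln(p_i)) over taxa.

    `counts` are per-taxon read counts for one sample; higher H means
    a richer, more even community. (Illustrative only; the study's
    exact diversity metric is not specified in this article.)
    """
    total = sum(counts)
    props = (c / total for c in counts if c > 0)
    return -sum(p * math.log(p) for p in props)

# A community dominated by one taxon has lower H than an even one.
print(shannon_index([90, 5, 3, 2]))     # ~0.43
print(shannon_index([25, 25, 25, 25]))  # ~1.39 (= ln 4)
```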
Daily symptom severity improved significantly during the second week of the diet compared with baseline, with reduction in abdominal discomfort, bloating, distention, constipation, and flatulence. Further significant improvements in measures such as abdominal pain, diarrhea, fatigue, urgency, and brain fog were observed after reintroducing regular food.
“We observed 73% breath test normalization and 83% global symptom relief — with 100% adherence and tolerance to 2 weeks of exclusive PED,” Rezaie told GI & Hepatology News. No serious adverse events occurred during the study, he added.
Lactose breath test normalization rates post-PED were 58% in patients with IMO, 100% in patients with SIBO, and 75% in those with both conditions.
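Those subgroup rates are consistent with the overall 73% figure if the reported percentages are mapped back to whole-patient counts in the 30-person cohort (an assumption on our part). A quick check in Python:

```python
# Reconstructing the overall normalization rate from the subgroups
# (assumption: percentages correspond to whole-patient counts in a
# 30-person cohort split 40% IMO / 20% SIBO / 40% both).
n_imo, n_sibo, n_both = 12, 6, 12  # 40%, 20%, 40% of 30
normalized = round(0.58 * n_imo) + round(1.00 * n_sibo) + round(0.75 * n_both)
print(normalized, normalized / 30)  # 22 patients -> ~73%
```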
The extent of patient response to PED was notable, given that 83% had failed prior treatments, Rezaie said.
“While we expected benefit based on palatability improvements and prior retrospective data, the rapid reduction in methane and hydrogen gas — and the sustained microbiome modulation even after reintroducing a regular diet — exceeded expectations,” he said. A significant reduction in visceral fat was another novel finding.
“This study reinforces the power of diet as a therapeutic tool,” Rezaie said, adding that the results show that elemental diets can be palatable, thereby improving patient adherence, tolerance, and, eventually, effectiveness. This is particularly valuable for patients with SIBO and IMO who do not tolerate or respond to antibiotics, prefer nonpharmacologic options, or experience recurrent symptoms after antibiotic treatment.
Limitations and Next Steps
Study limitations included the lack of a placebo group with a sham diet, the short follow-up after reintroducing a regular diet, and the inability to assess microbial gene function.
However, the results support the safety, tolerance, and benefit of a PED in patients with IMO/SIBO. Personalized dietary interventions that support the growth of beneficial bacteria may be an effective approach to treating these disorders, Rezaie and colleagues noted in their publication.
Although the current study is a promising first step, longer-term studies are needed to evaluate the durability of microbiome and symptom improvements, Rezaie said.
Making the Most of Microbiome Manipulation
Elemental diets may help modulate the gut microbiome while reducing immune activation, making them attractive for microbiome-targeted gastrointestinal therapies, Jatin Roper, MD, a gastroenterologist at Duke University, Durham, North Carolina, told GI & Hepatology News.
“Antibiotics are only effective in half of SIBO cases and often require retreatment, so better therapies are needed,” said Roper, who was not affiliated with the study. He added that its findings confirmed the researchers’ hypothesis that a PED can be both safe and effective in patients with SIBO.
Roper noted the 83% symptom improvement as the study’s most unexpected and encouraging finding, as it represents a substantial improvement compared with standard antibiotic therapy. “It is also surprising that the tolerance rate of the elemental diet in this study was 100%,” he said.
However, diet palatability remains a major barrier in real-world practice.
“Adherence rates are likely to be far lower than in trials in which patients are closely monitored, and this challenge will not be easily overcome,” he added.
The study’s limitations, including the lack of metagenomic analysis and a placebo group, are important to address in future research, Roper said. In particular, controlled trials of elemental diets are needed to determine whether microbiome changes are directly responsible for symptom improvement.
The study was supported in part by Good LFE and the John and Geraldine Cusenza Foundation. Rezaie disclosed serving as a consultant/speaker for Bausch Health and having equity in Dieta Health, Gemelli Biotech, and Good LFE. Roper had no financial conflicts to disclose.
A version of this article first appeared on Medscape.com.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Wearable Devices May Predict IBD Flares Weeks in Advance
Consumer wearable devices can detect physiological changes that precede flares of inflammatory bowel disease (IBD) by as much as 7 weeks, according to investigators.
These findings suggest that widely used consumer wearables could support long-term monitoring of IBD and other chronic inflammatory conditions, lead author Robert P. Hirten, MD, of Icahn School of Medicine at Mount Sinai, New York, and colleagues reported.
“Wearable devices are an increasingly accepted tool for monitoring health and disease,” the investigators wrote in Gastroenterology. “They are frequently used in non–inflammatory-based diseases for remote patient monitoring, allowing individuals to be monitored outside of the clinical setting, which has resulted in improved outcomes in multiple disease states.”
Progress has been slower for inflammatory conditions, the investigators noted, despite interest from both providers and patients. Prior studies have explored activity and sleep tracking, or sweat-based biomarkers, as potential tools for monitoring IBD.
Hirten and colleagues took a novel approach, focusing on physiologic changes driven by autonomic nervous system dysfunction — a hallmark of chronic inflammation. Conditions like IBD are associated with reduced parasympathetic activity and increased sympathetic tone, which in turn affect heart rate and heart rate variability. Heart rate tends to rise during flares, while heart rate variability decreases.
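The article does not specify which heart rate variability metric the devices capture. One common time-domain measure, RMSSD (the root mean square of successive differences between inter-beat intervals), illustrates the idea that a steadier, flare-like rhythm yields a lower value; a minimal Python sketch under that assumption:

```python
import math

def rmssd(ibi_ms):
    """RMSSD: root mean square of successive differences between
    inter-beat intervals (milliseconds). Lower values indicate
    reduced heart rate variability. (Illustrative; the metric
    actually reported by each wearable is not specified here.)
    """
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# A steadier rhythm (the flare-like pattern described above) yields
# a lower RMSSD than a more variable one.
print(rmssd([800, 805, 798, 803, 801]))  # ~5 ms, low variability
print(rmssd([800, 860, 790, 845, 815]))  # ~56 ms, higher variability
```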
Their prospective cohort study included 309 adults with Crohn’s disease (n = 196) or ulcerative colitis (n = 113). Participants used their own or a study-provided Apple Watch, Fitbit, or Oura Ring to passively collect physiological data, including heart rate, resting heart rate, heart rate variability, and step count. A subset of Apple Watch users also contributed oxygen saturation data.
Participants also completed daily symptom surveys using a custom smartphone app and reported laboratory values such as C-reactive protein, erythrocyte sedimentation rate, and fecal calprotectin, as part of routine care. These data were used to identify symptomatic and inflammatory flare periods.
Over a mean follow-up of about 7 months, the physiological data consistently distinguished both types of flares from periods of remission. Heart rate variability dropped significantly during flares, while heart rate and resting heart rate increased. Step counts decreased during inflammatory flares but not during symptom-only flares. Oxygen saturation stayed mostly the same, except for a slight drop seen in participants with Crohn’s disease.
These physiological changes could be detected as early as 7 weeks before a flare. Predictive models that combined multiple metrics — heart rate variability, heart rate, resting heart rate, and step count — were highly accurate, with F1 scores as high as 0.90 for predicting inflammatory flares and 0.83 for predicting symptomatic flares.
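For context, the F1 score is the harmonic mean of precision and recall, so a value of 0.90 requires both few false alarms and few missed flares. A minimal Python sketch of the computation (illustrative, not the study's modeling pipeline):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall.

    tp/fp/fn are counts of true-positive, false-positive, and
    false-negative flare predictions. (Illustrative only; the
    study's pipeline is not described in this article.)
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: 90 correctly flagged flares, 10 false alarms, and 10 missed
# flares gives precision = recall = 0.90, hence F1 = 0.90.
print(f1_score(tp=90, fp=10, fn=10))
```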
In addition, wearable data helped differentiate between flares caused by active inflammation and those driven by symptoms alone. Even when symptoms were similar, heart rate variability, heart rate, and resting heart rate were significantly higher when inflammation was present — suggesting wearable devices may help address the common mismatch between symptoms and actual disease activity in IBD.
“These findings support the further evaluation of wearable devices in the monitoring of IBD,” the investigators concluded.
The study was supported by the National Institute of Diabetes and Digestive and Kidney Diseases and Ms. Jenny Steingart. The investigators disclosed additional relationships with Agomab, Lilly, Merck, and others.
Dana J. Lukin, MD, PhD, AGAF, of New York-Presbyterian Hospital/Weill Cornell Medicine, New York City, described the study by Hirten et al as “provocative.”
“While the data require a machine learning approach to transform the recorded values into predictive algorithms, it is intriguing that routinely recorded information from smart devices can be used in a manner to inform disease activity,” Lukin said in an interview. “Furthermore, the use of continuously recorded physiological data in this study likely reflects longitudinal health status more accurately than cross-sectional use of patient-reported outcomes or episodic biomarker testing.”
In addition to offering potentially higher accuracy than conventional monitoring, the remote strategy is also more convenient, he noted.
“The use of these devices is likely easier to adhere to than the use of other contemporary monitoring strategies involving the collection of stool or blood samples,” Lukin said. “It may become possible to passively monitor a larger number of patients at risk for flares remotely,” especially given that “almost half of Americans utilize wearables, such as the Apple Watch, Oura Ring, and Fitbit.”
Still, Lukin predicted challenges with widespread adoption.
“More than half of Americans do not routinely [use these devices],” Lukin said. “Cost, access to internet and smartphones, and adoption of new technology may all be barriers to more widespread use.”
He suggested that the present study offers proof of concept, but more prospective data are needed to demonstrate how this type of remote monitoring might improve real-world IBD care.
“Potential studies will assess change in healthcare utilization, corticosteroids, surgery, and clinical flare activity with the use of these data,” Lukin said. “As we learn more about how to handle the large amount of data generated by these devices, our algorithms can be refined to make a feasible platform for practices to employ in routine care.”
Lukin disclosed relationships with Boehringer Ingelheim, Takeda, Vedanta, and others.
Dana J. Lukin, MD, PhD, AGAF, of New York-Presbyterian Hospital/Weill Cornell Medicine, New York City, described the study by Hirten et al as “provocative.”
“While the data require a machine learning approach to transform the recorded values into predictive algorithms, it is intriguing that routinely recorded information from smart devices can be used in a manner to inform disease activity,” Lukin said in an interview. “Furthermore, the use of continuously recorded physiological data in this study likely reflects longitudinal health status more accurately than cross-sectional use of patient-reported outcomes or episodic biomarker testing.”
In addition to offering potentially higher accuracy than conventional monitoring, the remote strategy is also more convenient, he noted.
“The use of these devices is likely easier to adhere to than the use of other contemporary monitoring strategies involving the collection of stool or blood samples,” Lukin said. “It may become possible to passively monitor a larger number of patients at risk for flares remotely,” especially given that “almost half of Americans utilize wearables, such as the Apple Watch, Oura Ring, and Fitbit.”
Still, Lukin predicted challenges with widespread adoption.
“More than half of Americans do not routinely [use these devices],” Lukin said. “Cost, access to internet and smartphones, and adoption of new technology may all be barriers to more widespread use.”
He suggested that the present study offers proof of concept, but more prospective data are needed to demonstrate how this type of remote monitoring might improve real-world IBD care.
“Potential studies will assess change in healthcare utilization, corticosteroids, surgery, and clinical flare activity with the use of these data,” Lukin said. “As we learn more about how to handle the large amount of data generated by these devices, our algorithms can be refined to make a feasible platform for practices to employ in routine care.”
Lukin disclosed relationships with Boehringer Ingelheim, Takeda, Vedanta, and others.
Dana J. Lukin, MD, PhD, AGAF, of New York-Presbyterian Hospital/Weill Cornell Medicine, New York City, described the study by Hirten et al as “provocative.”
“While the data require a machine learning approach to transform the recorded values into predictive algorithms, it is intriguing that routinely recorded information from smart devices can be used in a manner to inform disease activity,” Lukin said in an interview. “Furthermore, the use of continuously recorded physiological data in this study likely reflects longitudinal health status more accurately than cross-sectional use of patient-reported outcomes or episodic biomarker testing.”
In addition to offering potentially higher accuracy than conventional monitoring, the remote strategy is also more convenient, he noted.
“The use of these devices is likely easier to adhere to than the use of other contemporary monitoring strategies involving the collection of stool or blood samples,” Lukin said. “It may become possible to passively monitor a larger number of patients at risk for flares remotely,” especially given that “almost half of Americans utilize wearables, such as the Apple Watch, Oura Ring, and Fitbit.”
Still, Lukin predicted challenges with widespread adoption.
“More than half of Americans do not routinely [use these devices],” Lukin said. “Cost, access to internet and smartphones, and adoption of new technology may all be barriers to more widespread use.”
He suggested that the present study offers proof of concept, but more prospective data are needed to demonstrate how this type of remote monitoring might improve real-world IBD care.
“Potential studies will assess change in healthcare utilization, corticosteroids, surgery, and clinical flare activity with the use of these data,” Lukin said. “As we learn more about how to handle the large amount of data generated by these devices, our algorithms can be refined to make a feasible platform for practices to employ in routine care.”
Lukin disclosed relationships with Boehringer Ingelheim, Takeda, Vedanta, and others.
FROM GASTROENTEROLOGY
Low-Quality Food Environments Increase MASLD-related Mortality
Counties with low-quality food environments have significantly higher rates of mortality related to metabolic dysfunction–associated steatotic liver disease (MASLD), according to investigators.
These findings highlight the importance of addressing disparities in food environments and social determinants of health to help reduce MASLD-related mortality, lead author Annette Paik, MD, of Inova Health System, Falls Church, Virginia, and colleagues reported.
“Recent studies indicate that food swamps and deserts, as surrogates for food insecurity, are linked to poor glycemic control and higher adult obesity rates,” the investigators wrote in Clinical Gastroenterology and Hepatology. “Understanding the intersection of these factors with sociodemographic and clinical variables offers insights into MASLD-related outcomes, including mortality.”
To this end, the present study examined the association between food environments and MASLD-related mortality across 2,195 US counties. County-level mortality data were obtained from the CDC WONDER database (2016-2020) and linked to food environment data from the US Department of Agriculture Food Environment Atlas using Federal Information Processing Standards (FIPS) codes. Food deserts were defined as low-income areas with limited access to grocery stores, while food swamps were characterized by a predominance of unhealthy food outlets relative to healthy ones.
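For readers who work with these public datasets, the county-level linkage is simple in practice. Below is a minimal pandas sketch of the kind of FIPS-code merge described above; the file and column names are hypothetical placeholders, not the study's actual data files.

import pandas as pd

# Hypothetical extracts of CDC WONDER mortality data and the USDA Food
# Environment Atlas; file and column names are illustrative placeholders.
mortality = pd.read_csv("cdc_wonder_masld_mortality.csv", dtype={"fips": str})
atlas = pd.read_csv("usda_food_environment_atlas.csv", dtype={"fips": str})

# Zero-pad FIPS codes to 5 digits so county identifiers align across sources
for df in (mortality, atlas):
    df["fips"] = df["fips"].str.zfill(5)

# Inner join keeps only counties present in both datasets
counties = mortality.merge(atlas, on="fips", how="inner")
print(counties.head())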
Additional data on obesity, type 2 diabetes (T2D), and nine social determinants of health were obtained from CDC PLACES and other publicly available datasets. Counties were stratified into quartiles based on MASLD-related mortality rates. Population-weighted mixed-effects linear regression models were used to evaluate associations between food environment exposures and MASLD mortality, adjusting for region, rural-urban status, age, sex, race, insurance coverage, chronic disease prevalence, SNAP participation, and access to exercise facilities.
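As a rough illustration of the modeling approach, the sketch below fits a population-weighted least-squares regression with region as a fixed effect. This is a deliberate simplification: the study used mixed-effects models, and all variable names and values here are synthetic.

import pandas as pd
import statsmodels.formula.api as smf

# Tiny synthetic stand-in for the merged county table (values are made up)
counties = pd.DataFrame({
    "masld_mortality": [2.1, 3.4, 1.8, 4.0, 2.9, 3.7],
    "food_desert_pct": [10, 25, 8, 30, 15, 22],
    "food_swamp_pct":  [60, 75, 55, 80, 68, 74],
    "population":      [50000, 12000, 90000, 8000, 40000, 15000],
    "region":          ["South", "South", "West", "South", "Midwest", "South"],
})

# Weighted least squares, with counties weighted by population; region enters
# as a fixed effect here rather than the paper's random effects
fit = smf.wls(
    "masld_mortality ~ food_desert_pct + food_swamp_pct + C(region)",
    data=counties,
    weights=counties["population"],
).fit()
print(fit.params)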
Counties with the worst food environments had significantly higher MASLD-related mortality, even after adjusting for clinical and sociodemographic factors. Compared with counties in the lowest quartile of MASLD mortality, those in the highest quartile had a greater proportion of food deserts (22.3% vs 14.9%; P < .001) and food swamps (73.1% vs 65.7%; P < .001). They also had a significantly higher prevalence of obesity (40.5% vs 32.5%), type 2 diabetes (15.8% vs 11.4%), and physical inactivity (33.7% vs 24.9%).
Demographically, counties with higher MASLD mortality had significantly larger proportions of Black and Hispanic residents, and were more likely to be rural and located in the South. These counties also had significantly lower median household incomes, higher poverty rates, fewer adults with a college education, lower access to exercise opportunities, greater SNAP participation, less broadband access, and more uninsured adults.
In multivariable regression models, both food deserts and food swamps remained independently associated with MASLD mortality. Counties in the highest quartile of food desert exposure had a 14.5% higher MASLD mortality rate, compared with the lowest quartile (P = .001), and those in the highest quartile for food swamp exposure had a 13.9% higher mortality rate (P = .005).
Type 2 diabetes, physical inactivity, and lack of health insurance were also independently associated with increased MASLD-related mortality.
“Implementing public health interventions that address the specific environmental factors of each county can help US policymakers promote access to healthy, culturally appropriate food choices at affordable prices and reduce the consumption of poor-quality food,” the investigators wrote. “Moreover, improving access to parks and exercise facilities can further enhance the impact of healthy nutrition. These strategies could help curb the growing epidemic of metabolic diseases, including MASLD and related mortality.”
This study was supported by King Faisal Specialist Hospital & Research Center, the Global NASH Council, Center for Outcomes Research in Liver Diseases, and the Beatty Liver and Obesity Research Fund, Inova Health System. The investigators disclosed no conflicts of interest.
A healthy lifestyle continues to be foundational to the management of metabolic dysfunction–associated steatotic liver disease (MASLD). Poor diet quality is a risk factor for developing MASLD in the US general population. Food deserts and food swamps are symptoms of socioeconomic hardship: both are characterized by limited access to healthy food (as described by the US Department of Agriculture Dietary Guidelines for Americans) owing to the absence of grocery stores and supermarkets. Food swamps, however, also feature abundant access to unhealthy, energy-dense yet nutritionally sparse (EDYNS) foods.
The article by Paik et al shows that food deserts and food swamps are associated not only with the burden of MASLD in the United States but also with MASLD-related mortality. Counties with the highest MASLD-related mortality had greater proportions of food swamps and food deserts; more poverty, unemployment, and household crowding; less broadband internet access; lower rates of high school education; and more elderly and Hispanic residents, and they were more likely to be located in the South.
MASLD appears to have origins in the dark underbelly of socioeconomic hardship that might preclude many of our patients from complying with lifestyle changes. Policy changes are urgently needed at a national level, from increasing incentives to establish grocery stores in food deserts to limiting the proportion of EDYNS foods in grocery stores and requiring conspicuous labeling of EDYNS foods by the Food and Drug Administration. At the individual practice level, we can support MASLD patients in the clinic with a dietitian and educational materials and, where possible, use applications that assist healthy dietary habits, empowering patients to choose healthy food options.
Niharika Samala, MD, is assistant professor of medicine, associate program director of the GI Fellowship, and director of the IUH MASLD/NAFLD Clinic at the Indiana University School of Medicine, Indianapolis. She reported no relevant conflicts of interest.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Infrequent HDV Testing Raises Concern for Worse Liver Outcomes
Testing for hepatitis delta virus (HDV) remains infrequent among US patients with chronic hepatitis B (CHB), even though concurrent HDV infection is associated with worse liver outcomes, according to new findings.
The low testing rate suggests limited awareness of HDV-associated risks in patients with CHB and underscores the need for earlier testing and diagnosis, lead author Robert J. Wong, MD, of Stanford University School of Medicine, Stanford, California, and colleagues reported.
“Data among US populations are lacking to describe the epidemiology and long-term outcomes of patients with CHB and concurrent HDV infection,” the investigators wrote in Gastro Hep Advances (2025 Oct. doi: 10.1016/j.gastha.2024.10.015).
Prior studies have found that only 6% to 19% of patients with CHB are tested for HDV, and among those tested, prevalence is relatively low, between 2% and 4.6%. Although relatively uncommon, HDV carries a substantial clinical and economic burden, Wong and colleagues noted, highlighting the importance of clinical awareness and accurate epidemiologic data.
The present study analyzed data from the Veterans Affairs (VA) Corporate Data Warehouse between 2010 and 2023. Adults with CHB were identified based on laboratory-confirmed markers and ICD-9/10 codes. HDV testing (anti-HDV antibody and HDV RNA) was assessed, and predictors of testing were evaluated using multivariable logistic regression.
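The predictor analysis is a standard multivariable logistic regression; the sketch below shows the general form of such a model with synthetic data and hypothetical variable names, not the study's actual covariates or results.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "hdv_tested":   rng.integers(0, 2, n),  # outcome: received HDV testing
    "hbeag_pos":    rng.integers(0, 2, n),
    "on_antiviral": rng.integers(0, 2, n),
    "hcv_coinf":    rng.integers(0, 2, n),
})

fit = smf.logit("hdv_tested ~ hbeag_pos + on_antiviral + hcv_coinf", df).fit(disp=0)
print(np.exp(fit.params))  # exponentiated coefficients = odds ratios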
To examine liver-related outcomes, patients who tested positive for HDV were propensity score–matched 1:2 with CHB patients who tested negative. Matching accounted for age, sex, race/ethnicity, HBeAg status, antiviral treatment, HCV and HIV coinfection, diabetes, and alcohol use. Patients with cirrhosis or hepatocellular carcinoma (HCC) at baseline were excluded. Incidence of cirrhosis, hepatic decompensation, and HCC was estimated using competing-risks Nelson-Aalen methods.
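The Nelson-Aalen estimator yields a nonparametric cumulative hazard from censored follow-up data. A minimal sketch using the lifelines library is below; it omits the competing-risks adjustment the authors applied, and the follow-up data are synthetic.

import numpy as np
from lifelines import NelsonAalenFitter

rng = np.random.default_rng(0)
# Synthetic follow-up times (years) and cirrhosis event indicators
durations = np.clip(rng.exponential(8.0, size=200), 0.1, 13.0)
events = rng.integers(0, 2, size=200)  # 1 = event observed, 0 = censored

naf = NelsonAalenFitter()
naf.fit(durations, event_observed=events, label="HDV-positive")
print(naf.cumulative_hazard_.tail())  # cumulative hazard over follow-up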
Among 27,548 veterans with CHB, only 16.1% underwent HDV testing. Of those tested, 3.25% were HDV positive. Testing rates were higher among patients who were HBeAg positive, on antiviral therapy, or identified as Asian or Pacific Islander.
Conversely, testing was significantly less common among patients with high-risk alcohol use, past or current drug use, cirrhosis at diagnosis, or HCV coinfection. In contrast, HIV coinfection was associated with increased odds of being tested.
Among those tested, HDV positivity was more likely in patients with HCV coinfection, cirrhosis, or a history of drug use. On multivariable analysis, these factors were independent predictors of HDV positivity.
In the matched cohort of 71 HDV-positive patients and 140 HDV-negative controls, the incidence of cirrhosis was more than 3-fold higher in HDV-positive patients (4.39 vs 1.30 per 100,000 person-years; P < .01), and hepatic decompensation was over 5 times more common (2.18 vs 0.41 per 100,000 person-years; P = .01). There was also a nonsignificant trend toward increased HCC risk in the HDV group.
“These findings align with existing studies and confirm that among a predominantly non-Asian US cohort of CHB patients, presence of concurrent HDV is associated with more severe liver disease progression,” the investigators wrote. “These observations, taken together with the low rates of HDV testing overall and particularly among high-risk individuals, emphasizes the need for greater awareness and novel strategies on how to improve HDV testing and diagnosis, particularly given that novel HDV therapies are on the near horizon.”
The study was supported by Gilead. The investigators disclosed additional relationships with Exact Sciences, GSK, Novo Nordisk, and others.
Hepatitis D virus (HDV) is an RNA “sub-virus” that infects patients with co-existing hepatitis B virus (HBV) infections. HDV infection currently affects approximately 15-20 million people worldwide but is an orphan disease in the United States with fewer than 100,000 individuals infected today.
Those with HDV have a 70% lifetime risk of hepatocellular carcinoma (HCC), cirrhosis, liver failure, death, or liver transplant. Yet no treatment for HDV is currently approved by the Food and Drug Administration (FDA) in the US, and only one therapy in the European Union holds full approval from the European Medicines Agency.
Despite HDV's severity and the limited treatment options, screening for HDV remains severely inadequate, often restricted to sequential testing of individuals at high risk. HDV screening would benefit from a revamped, reflex-based approach: when an individual diagnosed with HBV tests positive for hepatitis B surface antigen (HBsAg+), testing automatically reflexes to total anti-HDV antibody and, if that is positive, reflexes again to HDV-RNA polymerase chain reaction (PCR) quantitation. This is especially true in the Veterans Administration (VA)'s hospitals and clinics, where Wong and colleagues found very low rates of HDV testing among a national cohort of US veterans with chronic HBV.
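To make the proposed double-reflex cascade concrete, here is a hypothetical sketch of the decision logic; the function and field names are invented for illustration and do not represent any laboratory information system.

def hdv_reflex_step(hbsag_pos, anti_hdv_pos=None, hdv_rna_iu_ml=None):
    """Return the next action in a double-reflex HDV testing cascade."""
    if not hbsag_pos:
        return "HBsAg negative: no HDV testing indicated"
    if anti_hdv_pos is None:
        return "Reflex 1: order total anti-HDV antibody"
    if not anti_hdv_pos:
        return "Anti-HDV negative: cascade complete"
    if hdv_rna_iu_ml is None:
        return "Reflex 2: order HDV-RNA PCR quantitation"
    return f"HDV RNA {hdv_rna_iu_ml} IU/mL: refer for HDV management"

print(hdv_reflex_step(True))                     # -> Reflex 1
print(hdv_reflex_step(True, anti_hdv_pos=True))  # -> Reflex 2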
This study highlights the importance of timely HDV testing using reflex tools to improve diagnosis and HDV treatment, reducing long-term risks of liver-related morbidity and mortality.
Robert G. Gish, MD, AGAF, is principal at Robert G Gish Consultants LLC, clinical professor of medicine at Loma Linda University, Loma Linda, Calif., and medical director of the Hepatitis B Foundation. His complete list of disclosures can be found at www.robertgish.com/about.
FROM GASTRO HEP ADVANCES
Intensive Nutrition Therapy Improves Outcomes in Alcohol-Related ACLF
In a randomized controlled trial, dietitian-supported intensive nutritional therapy improved survival, reduced frailty, and lowered hospitalization rates compared with standard care in men with alcohol-related acute-on-chronic liver failure (ACLF).
The study, performed by a team from the Postgraduate Institute of Medical Education and Research, Chandigarh, India, was published in Clinical Gastroenterology and Hepatology.
ACLF related to alcohol use is associated with poor outcomes due to poor nutritional intake and frailty. Frail patients with ACLF face higher morbidity, mortality, and hospitalization rates than their nonfrail counterparts. However, research on the role of structured nutritional interventions in improving these outcomes is limited.
Patal Giri, MBBS, MD, and colleagues enrolled 70 men with alcohol-related ACLF and frailty (liver frailty index [LFI] > 4.5) in a single-center, open-label study. Half were randomly allocated to an intervention group receiving outpatient intensive nutrition therapy (OINT) plus standard medical treatment (SMT) and half to a control group receiving SMT alone for 3 months.
The intervention group received a monitored high-calorie, high-protein, salt-restricted diet prescribed by a dedicated senior liver dietitian. The control group received regular nutritional recommendations and was managed for ACLF-associated complications without intervention or guidance from the study team.
After 3 months of follow-up, overall survival (the primary outcome) was significantly improved in the OINT group compared with the control group (91.4% vs 57.1%), “suggesting that the improvement in nutrition status is associated with better survival,” the study team noted. Three patients died in the OINT group vs 15 in the SMT group.
OINT also led to a significant improvement in frailty, with LFI scores decreasing by an average of 0.93 in the intervention group vs 0.33 in the control group; 97% of patients improved from frail to prefrail status in the OINT group, whereas only 20% of patients improved in the SMT group.
The mean change in LFI of 0.93 with OINT is “well above the substantially clinically important difference” (change of 0.8) established in a previous study, the authors noted.
Significant improvements in weight and body mass index were also observed in the OINT group relative to the control group.
Liver disease severity, including Model for End-Stage Liver Disease (MELD) scores, showed greater improvement in the OINT group than in the control group (−8.7 vs −6.3 points from baseline to 3 months).
During the follow-up period, fewer patients in the intervention group than in the control group required a hospital stay (17% vs 45.7%).
Limitations of the study include the single-center design and the short follow-up period of 3 months, which limits long-term outcome assessment. Further, the study included only patients meeting Asian Pacific Association for the Study of the Liver (APASL) criteria for ACLF, which exclude patients with organ failure as defined by European Association for the Study of the Liver–Chronic Liver Failure (EASL-CLIF) Consortium criteria. Patients with more severe ACLF (MELD score > 30 or AARC score > 10) were also excluded.
Despite these limitations, the authors said their study showed that “dietician-monitored goal-directed nutrition therapy is very important in the management of patients with alcohol-related ACLF along with SMT.”
Confirmatory Data
Reached for comment, Katherine Patton, MEd, RD, a registered dietitian with the Center for Human Nutrition at Cleveland Clinic, Cleveland, Ohio, said it’s well known that the ACLF patient population has a “very high rate of morbidity and mortality and their quality of life tends to be poor due to their frailty. It is also fairly well-known that proper nutrition therapy can improve outcomes, however barriers to adequate nutrition include decreased appetite, nausea, pain, altered taste, and early satiety from ascites.”
“Hepatologists are likely stressing the importance of adequate protein energy intake and doctors may refer patients to an outpatient dietitian, but it is up to the patient to make that appointment and act on the recommendations,” Patton told GI & Hepatology News.
“If a dietitian works in the same clinic as the hepatologist and patients can be referred and seen the same day, this is ideal. During a hospital admission, protein/calorie intake can be more closely monitored and encouraged by a multi-disciplinary team,” Patton explained.
She cautioned that “the average patient is not familiar with how to apply general calorie and protein goals to their everyday eating habits. This study amplifies the role of a dietitian and what consistent education and resources can do to improve a patient’s quality of life and survival.”
This study had no specific funding. The authors have declared no relevant conflicts of interest. Patton had no relevant disclosures.
A version of this article appeared on Medscape.com.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Better Prep, Better Scope: Task Force Updates Colonoscopy Bowel Prep Advice
The latest consensus recommendations emphasize the importance of verbal and written patient education, refine diet restrictions, update optimal purgative regimens, and advise tracking bowel prep adequacy rates at both the individual endoscopist and unit levels.
“Colorectal cancer remains the second most common cause of cancer death in the United States, and colonoscopy is considered the gold standard for evaluating the colon, including assessing causes of colon-related signs or symptoms and the detection of precancerous lesions. It is well recognized that the adequacy of bowel preparation is essential for optimal colonoscopy performance,” the task force wrote.
Choice of Prep, Dosing and Timing, and Dietary Restrictions
When choosing bowel preparation regimens, the task force recommends considering the individual’s medical history, medications, and, when available, the adequacy of bowel preparation reported from prior colonoscopies. Other considerations include patient preference, associated additional costs to the patient, and ease in obtaining and consuming any purgatives or adjuncts.
In terms of timing and dose, the task force now “suggests that lower-volume bowel preparation regimens, such as those that rely on only 2 liters of fluid compared to the traditional 4L, are acceptable options for individuals considered unlikely to have an inadequate bowel preparation. This assumes that the purgative is taken in a split-dose fashion (half the evening prior to colonoscopy and half the morning of the colonoscopy),” co–lead author Brian C. Jacobson, MD, MPH, AGAF, with Massachusetts General Hospital and Harvard Medical School, both in Boston, said in an interview.
The task force also states that a same-day bowel preparation regimen for afternoon, but not morning, colonoscopy is a “reasonable alternative to the now-common split-dose regimen,” Jacobson said.
The group did not find one bowel preparation purgative to be better than others, although Table 7 in the document details characteristics of commonly used prep regimens, including their side effects and contraindications.
Recommendations regarding dietary modifications depend upon the patient’s risk for inadequate bowel prep. For patients at low risk for inadequate bowel prep, the task force recommends limiting dietary restrictions to the day before a colonoscopy, relying on either clear liquids or low-fiber/low-residue diets for the early and midday meals. Table 5 in the document provides a list of low-residue foods and sample meals.
The task force also suggests adding oral simethicone (≥ 320 mg) to the bowel prep as a way to potentially improve visualization, although they acknowledge that further research is needed.
Asked how these updated consensus recommendations might change current clinical practice, Jacobson said: “Some physicians may try to identify individuals who will do just as well with a more patient-friendly, easily tolerated bowel preparation regimen, including less stringent dietary restrictions leading up to colonoscopy.”
He noted that the task force prefers the term “guidance” to “guidelines.”
New Quality Benchmark
The task force recommends documenting bowel prep quality in the endoscopy report after all washing and suctioning have been completed using reliably understood descriptors that communicate the adequacy of the preparation.
They recommend the term “adequate bowel preparation” be used to indicate that standard screening or surveillance intervals can be assigned based on the findings of the colonoscopy.
Additionally, the task force recommends that endoscopy units and individual endoscopists track and aim for ≥ 90% adequacy rates in bowel preparation — up from the 85% benchmark contained in the prior recommendations.
Jacobson told this news organization it’s “currently unknown” how many individual endoscopists and endoscopy units track and meet the 90% benchmark.
David Johnson, MD, professor of medicine and chief of gastroenterology at Eastern Virginia Medical School, Norfolk, who wasn’t on the task force, said endoscopy units and providers “need to be accountable and should be tracking this quality metric.”
Johnson noted that bowel prep inadequacy has “intrinsic costs,” impacting lesion detection, CRC incidence, and patient outcomes. Inadequate prep leads to “increased risk for morbidity, mortality, longer appointment and wait times for rescheduling, and negative connotations that may deter patients from returning.”
Brian Sullivan, MD, MHS, assistant professor of medicine, division of gastroenterology, Duke University School of Medicine, Durham, North Carolina, who wasn’t on the task force, said the recommendation to target a 90% or higher bowel preparation adequacy rate is “appreciated.”
“This benchmark encourages practices to standardize measurement, tracking, and reporting of preparation quality at both the individual and unit levels. Specifically, it should motivate providers to critically evaluate their interpretation of preparation quality and ensure adequate cleansing before making determinations,” Sullivan said in an interview.
“At the unit level, this metric can identify whether there are opportunities for quality improvement, such as by implementing evidence-based initiatives (provided in the guidance) to enhance outpatient preparation processes,” Sullivan noted.
The task force emphasized that the majority of consensus recommendations focus on individuals at average risk for inadequate bowel prep. Patients at high risk for inadequate bowel prep (eg, diabetes, constipation, opioid use) should receive tailored instructions, including a more extended dietary prep and high-volume purgatives.
‘Timely and Important’ Updates
Sullivan said the updated consensus recommendations on optimizing bowel preparation quality for colonoscopy are both “timely and important.”
“Clear guidance facilitates dissemination and adoption, promoting flexible yet evidence-based approaches that enhance patient and provider satisfaction while potentially improving CRC prevention outcomes. For instance, surveys reveal that some practices still do not utilize split-dose bowel preparation, which is proven to improve preparation quality, particularly for the right-side of the colon. This gap underscores the need for standardized guidance to ensure high-quality colonoscopy and effective CRC screening,” Sullivan said.
He also noted that the inclusion of lower-volume bowel prep regimens and less intensive dietary modifications for selected patients is a “welcome update.”
“These options can improve patient adherence and satisfaction, which are critical not only for the quality of the index exam but also for ensuring patients return for future screenings, thereby supporting long-term CRC prevention efforts,” Sullivan said.
The task force includes representatives from the American Gastroenterological Association, the American College of Gastroenterology, and the American Society for Gastrointestinal Endoscopy.
The consensus document was published online in the three societies’ respective scientific journals — Gastroenterology, the American Journal of Gastroenterology, and Gastrointestinal Endoscopy.
This research had no financial support. Jacobson is a consultant for Curis and Guardant Health. Sullivan had no disclosures. Johnson is an adviser to ISOThrive and a past president of the American College of Gastroenterology.
A version of this article first appeared on Medscape.com.
FROM GASTROENTEROLOGY