Most GI Service Chiefs Support POCUS Training, But Uptake Is Slow
Most GI service chiefs support point-of-care ultrasound (POCUS) training, but uptake remains slow, according to a national survey.
Low POCUS uptake may be explained by substantial barriers to implementation, including a lack of trained instructors, necessary equipment, and support staff, lead author Keerthi Thallapureddy, MD, of the University of Texas Health San Antonio, and colleagues reported.
“POCUS is being increasingly used by gastroenterologists due to its portability and real-time diagnostic ability,” the investigators wrote in Gastro Hep Advances, but “there is limited understanding of how gastroenterologists use POCUS.”
To learn more, the investigators conducted a nationwide survey of the VA healthcare system. Separate questionnaires were sent to chiefs of staff (n = 130) and GI service chiefs (n = 117), yielding response rates of 100% and 79%, respectively.
Respondents represented a wide distribution of geographic regions and institutional complexity levels, with 80% of GI groups based at high-complexity centers and 92% in urban locations. A minority (8%) reported the presence of a liver transplant program.
Data collection focused on the prevalence of POCUS use, types of clinical applications, institutional policies and training processes, and perceived or actual barriers to wider adoption. Barriers were sorted into three categories: training, equipment, and infrastructure.
Of the 93 GI service chiefs who participated in the survey, 44% reported that at least 1 gastroenterologist at their facility currently uses POCUS. Most common procedural uses were paracentesis (23%) and liver biopsy (13%), while ascites assessment (19%) and biliary visualization (7%) were the most common diagnostic uses.
Among the same respondents, 69% said they would support sending clinicians to a POCUS training course, and 37% said their teams had expressed an active interest in pursuing such training. Only 17% of facilities had a formal process in place to obtain POCUS training, and an equal proportion had implemented a facility-wide policy to guide its use.
Barriers to implementation were widespread and often multifactorial.
Most challenges related to training: 48% of sites reported a lack of trained providers, 28% cited insufficient funding for training, 24% noted a lack of training opportunities, and 14% reported difficulty securing travel funds.
Equipment limitations were also common, with 41% of sites lacking ultrasound machines and 27% lacking funding to purchase them.
Institutional infrastructure posed further hurdles. Nearly a quarter of GI chiefs (23%) reported lacking a clinician champion to lead implementation, while others cited a lack of support staff, simulation space, privileging criteria, image archiving capabilities, or standardized reporting forms.
“Our findings on current POCUS use, training, barriers, and infrastructure can guide expansion of POCUS use and training among GI groups,” Dr. Thallapureddy and colleagues wrote, noting that early efforts to expand access to GI-specific POCUS training are already underway.
They cited growing interest from national organizations such as the American Gastroenterological Association and the American Association for the Study of Liver Diseases, the latter of which piloted training workshops at the 2024 Liver Meeting. Similarly, the International Bowel Ultrasound Group now offers a 3-part certification program in intestinal ultrasound and is developing additional online and interactive modules to improve training accessibility.
The study was supported by a US Department of Veterans Affairs Quality Enhancement Research Initiative Partnered Evaluation Initiative grant and the VA National Center for Patient Safety. The investigators reported no conflicts of interest.
FROM GASTRO HEP ADVANCES
IBD Medications Show No Link with Breast Cancer Recurrence
Medications used to treat inflammatory bowel disease (IBD) show no link with breast cancer recurrence, according to investigators.
These findings diminish concerns that IBD therapy could theoretically reactivate dormant micrometastases, lead author Guillaume Le Cosquer, MD, of Toulouse University Hospital, Toulouse, France, and colleagues reported.
“In patients with IBD, medical management of subjects with a history of breast cancer is a frequent and unresolved problem for clinicians,” the investigators wrote in Clinical Gastroenterology and Hepatology (2024 Nov. doi: 10.1016/j.cgh.2024.09.034).
Previous studies have reported that conventional immunosuppressants and biologics do not increase risk of incident cancer among IBD patients with a prior nondigestive malignancy; however, recent guidelines from the European Crohn’s and Colitis Organisation (ECCO) suggest that data are insufficient to make associated recommendations, prompting the present study.
“[T]he major strength of our work is that it is the first to focus on the most frequent cancer (breast cancer) in patients with IBD only, with the longest follow-up after breast cancer in patients with IBD ever published,” Dr. Le Cosquer and colleagues noted.
The dataset included 207 patients with IBD and a history of breast cancer, drawn from 7 tertiary centers across France.
The index date was the time of breast cancer diagnosis, and patients were followed for a median of 71 months. The median time from cancer diagnosis to initiation of IBD treatment was 28 months.
First-line post-cancer treatments included conventional immunosuppressants (19.3%), anti–tumor necrosis factor (anti-TNF) agents (19.8%), vedolizumab (7.2%), and ustekinumab (1.9%). Approximately half (51.6%) received no immunosuppressive therapy during follow-up.
Over the study period, 42 incident cancers were recorded (20.3%), among which 34 were breast cancer recurrences. Adjusted incidence rates per 1000 person-years were 10.2 (95% CI, 6.0–16.4) in the untreated group and 28.9 (95% CI, 11.6–59.6) in patients exposed to immunosuppressive or biologic therapies (P = .0519). Incident cancer–free survival did not differ significantly between treated and untreated groups (P = .4796).
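For readers who want to reproduce the arithmetic behind figures like "10.2 per 1000 person-years (95% CI, 6.0-16.4)," here is a minimal sketch of how an incidence rate and an exact Poisson (Garwood) confidence interval are typically computed. The event counts and person-years in the example are hypothetical placeholders, not the study's data.

```python
# Minimal sketch: incidence rate per 1,000 person-years with an exact
# (Garwood) Poisson confidence interval. Example inputs are hypothetical.
from scipy.stats import chi2

def incidence_rate_per_1000(events: int, person_years: float, alpha: float = 0.05):
    """Return (rate, lower, upper) per 1,000 person-years."""
    rate = 1000 * events / person_years
    # Exact Poisson limits for the event count, then scaled to a rate
    lower_count = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    upper_count = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return rate, 1000 * lower_count / person_years, 1000 * upper_count / person_years

# Hypothetical example: 12 incident cancers over 1,200 person-years of follow-up
print(incidence_rate_per_1000(12, 1200))  # roughly (10.0, 5.2, 17.5)
```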
On multivariable analysis, independent predictors of incident cancer included T4d stage (P = .036), triple-negative status (P = .016), and follow-up duration shorter than 71 months (P = .005).
“[I]mmunosuppressant and biologic use in selected patients with IBD with prior breast cancer does not seem to increase the risk of incident cancer,” the investigators wrote, noting that the main predictors of cancer recurrence were known poor prognostic features of breast cancer.
Dr. Le Cosquer and colleagues acknowledged a lack of prospective safety data for biologic therapies among patients with prior malignancy, as these individuals are often excluded from clinical trials. Still, they underscored alignment between their findings and earlier retrospective studies, including analyses from the SAPPHIRE registry and Medicare data, which also found no significant increase in breast cancer recurrence with anti-TNF agents or newer biologics such as vedolizumab and ustekinumab.
“Our findings will help clinicians to make decisions in multidisciplinary meetings to start immunosuppressants or biologics in case of IBD flare-up in these patients,” they concluded.
The investigators disclosed relationships with AbbVie, Janssen, Takeda, and others.
Patients with inflammatory bowel disease (IBD) are at risk for a host of other illnesses, including cancer, at rates similar to or greater than those of the general population. When faced with uncertainty about drug safety after a cancer diagnosis, the reflex is to avoid the therapy altogether. This may lead to significant flares, which may in turn lead to difficulty tolerating cancer therapy and a shorter, lower-quality life.
Le Cosquer et al. address the question of the risk of incident cancer among patients with a history of breast cancer. The authors found that the risk was related to poor prognostic factors for breast cancer and not to IBD therapy. This should be interpreted with caution because the cohort, though the largest reported, comprises only 207 patients. After propensity score matching, crude incidence rates per 1000 person-years appeared greater in the treated arm (28.9) than in the untreated arm (10.2), P = .0519. With a greater number of patients, it is conceivable that the difference would reach significance.
On the flip side, prior to diagnosis, the majority of IBD patients received immunosuppressant or biologic therapy; however, after the index cancer, 51.6% of patients received no treatment. The survival curves show a near 25% difference in favor of treated patients after 300 months, albeit with very small numbers, raising the question of whether withholding IBD therapy is more harmful.
It is reassuring that the multiple papers cited in the article have not shown an increase in solid organ tumors to date. However, the practitioner needs to balance maintenance of IBD remission and overall health with the risk of complications in the patient with underlying malignancy. This complex decision making will shift over time and should involve the patient, the oncologist, and the gastroenterologist. In my practice, thiopurines are avoided and anti-integrins and IL-23s are preferred. However, anti-TNF agents and JAK-inhibitors are used when the patients’ overall benefit from disease control outweighs their (theoretical) risk for recurrence, infection, and thromboembolism.
Uma Mahadevan, MD, AGAF, is the Lynne and Marc Benioff Professor of Gastroenterology, and director of the Colitis and Crohn’s Disease Center at the University of California, San Francisco. She declared research support from the Leona M. and Harry B. Helmsley Trust, and has served as a consultant for multiple pharmaceutical firms.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Sterile Water Bottles Deemed Unnecessary for Endoscopy
Like diners saving on drinks, endoscopy units could skip the sterile bottled water and turn to the tap instead, according to a recent review.
“No direct evidence supports the recommendation and widespread use of sterile water during gastrointestinal endoscopy procedures,” lead author Deepak Agrawal, MD, chief of gastroenterology & hepatology at the Dell Medical School, University of Texas at Austin, and colleagues wrote in Gastro Hep Advances. “Guidelines recommending sterile water during endoscopy are based on limited evidence and mostly expert opinions.”
After reviewing the literature back to 1975, Dr. Agrawal and colleagues considered the use of sterile water in endoscopy via three frameworks: medical evidence and guidelines, environmental and broader health effects, and financial costs.
Only 2 studies – both from the 1990s – directly compared sterile and tap water use in endoscopy. Neither showed an increased risk of infection from tap water. In fact, some cultures from allegedly sterile water bottles grew pathogenic bacteria, while no patient complications were reported in either study.
“The recommendations for sterile water contradict observations in other medical care scenarios, for example, for the irrigation of open wounds,” Dr. Agrawal and colleagues noted. “Similarly, there is no benefit in using sterile water for enteral feeds in immunosuppressed patients, and tap water enemas are routinely acceptable for colon cleansing before sigmoidoscopies in all patients, irrespective of immune status.”
Current guidelines, including the 2021 US multisociety guideline on reprocessing flexible GI endoscopes and accessories, recommend sterile water for procedures involving mucosal penetration but acknowledge low-quality supporting evidence. These recommendations are based on outdated studies, some unrelated to GI endoscopy, Dr. Agrawal and colleagues pointed out, and rely heavily on cross-referenced opinion statements rather than clinical data.
They went on to suggest a concerning possibility: all those plastic bottles may actually cause more health problems than prevent them. The review estimates that the production and transportation of sterile water bottles contributes over 6,000 metric tons of emissions per year from US endoscopy units alone. What’s more, as discarded bottles break down, they release greenhouse gases and microplastics, the latter of which have been linked to cardiovascular disease, inflammatory bowel disease, and endocrine disruption.
Dr. Agrawal and colleagues also underscored the financial toxicity of sterile water bottles. Considering a 1-liter bottle of sterile water costs $3-10, an endoscopy unit performing 30 procedures per day spends approximately $1,000-3,000 per month on bottled water alone. Scaled nationally, the routine use of sterile water costs tens of millions of dollars each year, not counting indirect expenses associated with stocking and waste disposal.
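As a rough check on that estimate, the sketch below parameterizes the same back-of-the-envelope arithmetic. Only the $3-10 per-bottle price comes from the review; the bottles-per-day and working-days figures are illustrative assumptions.

```python
# Back-of-the-envelope sketch of the monthly sterile-water cost estimate.
# Bottles per day and working days per month are assumptions for illustration;
# only the $3-10 per-bottle price range is taken from the review.
def monthly_bottle_cost(bottles_per_day: float, working_days: int = 21,
                        low_price: float = 3.0, high_price: float = 10.0):
    bottles_per_month = bottles_per_day * working_days
    return bottles_per_month * low_price, bottles_per_month * high_price

# Assuming roughly 10-15 bottles opened per day in a unit doing 30 procedures:
print(monthly_bottle_cost(10))   # (630.0, 2100.0)
print(monthly_bottle_cost(15))   # (945.0, 3150.0)
```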
Considering the dubious clinical upside against the apparent environmental and financial downsides, Dr. Agrawal and colleagues urged endoscopy units to rethink routine sterile water use.
They proposed a pragmatic model: start the day with a new sterile or reusable bottle, refill with tap water for subsequent cases, and recycle the bottle at day’s end. Institutions should ensure their tap water meets safety standards, they added, such as those outlined in the Joint Commission’s 2022 R3 Report on standards for water management.
Dr. Agrawal and colleagues also called on GI societies to revise existing guidance to reflect today’s clinical and environmental realities. Until strong evidence supports the need for sterile water, they wrote, the smarter, safer, and more sustainable option may be simply turning on the tap.
The investigators disclosed relationships with Guardant, Exact Sciences, Freenome, and others.
In an editorial accompanying the study and comments to GI & Hepatology News, Dr. Seth A. Gross of NYU Langone Health urged gastroenterologists to reconsider the use of sterile water in endoscopy.
While the rationale for bottled water has centered on infection prevention, Gross argued that the evidence does not hold up, noting that this practice contradicts modern values around sustainability and evidence-based care.
The two relevant clinical studies comparing sterile versus tap water in endoscopy are almost 30 years old, he said, and neither detected an increased risk of infection with tap water, leading both to conclude that tap water is “safe and practical” for routine endoscopy.
Gross also pointed out the inconsistency of sterile water use in medical practice, noting that tap water is acceptable in procedures with higher infection risk than endoscopy.
“Lastly,” he added, “most people drink tap water and not sterile water on a daily basis without outbreaks of gastroenteritis from bacterial infections.”
Gross’s comments went beyond the data to emphasize the obvious but overlooked environmental impacts of sterile water bottles. He offered several challenging suggestions for making medicine more ecofriendly, like reducing travel to conferences, increasing the availability of telehealth, and choosing reusable devices over disposables.
But “what’s hiding in plain sight,” he said, “is our use of sterile water.”
While acknowledging that some patients, like those who are immunocompromised, might still warrant sterile water, Gross supported the review’s recommendation to use tap water instead. He called on GI societies and regulatory bodies to re-examine current policy and pursue updated guidance.
“Sometimes going back to the basics,” he concluded, “could be the most innovative strategy with tremendous impact.”
Seth A. Gross, MD, AGAF, is clinical chief in the Division of Gastroenterology & Hepatology at NYU Langone Health, and professor at the NYU Grossman School of Medicine, both in New York City. He reported no conflicts of interest.
FROM GASTRO HEP ADVANCES
Wearable Devices May Predict IBD Flares Weeks in Advance
Wearable devices may predict flares of inflammatory bowel disease (IBD) weeks in advance, according to investigators.
These findings suggest that widely used consumer wearables could support long-term monitoring of IBD and other chronic inflammatory conditions, lead author Robert P. Hirten, MD, of Icahn School of Medicine at Mount Sinai, New York, and colleagues reported.
“Wearable devices are an increasingly accepted tool for monitoring health and disease,” the investigators wrote in Gastroenterology. “They are frequently used in non–inflammatory-based diseases for remote patient monitoring, allowing individuals to be monitored outside of the clinical setting, which has resulted in improved outcomes in multiple disease states.”
Progress has been slower for inflammatory conditions, the investigators noted, despite interest from both providers and patients. Prior studies have explored activity and sleep tracking, or sweat-based biomarkers, as potential tools for monitoring IBD.
Hirten and colleagues took a novel approach, focusing on physiologic changes driven by autonomic nervous system dysfunction — a hallmark of chronic inflammation. Conditions like IBD are associated with reduced parasympathetic activity and increased sympathetic tone, which in turn affect heart rate and heart rate variability. Heart rate tends to rise during flares, while heart rate variability decreases.
Their prospective cohort study included 309 adults with Crohn’s disease (n = 196) or ulcerative colitis (n = 113). Participants used their own or a study-provided Apple Watch, Fitbit, or Oura Ring to passively collect physiological data, including heart rate, resting heart rate, heart rate variability, and step count. A subset of Apple Watch users also contributed oxygen saturation data.
Participants also completed daily symptom surveys using a custom smartphone app and reported laboratory values such as C-reactive protein, erythrocyte sedimentation rate, and fecal calprotectin, as part of routine care. These data were used to identify symptomatic and inflammatory flare periods.
Over a mean follow-up of about 7 months, the physiological data consistently distinguished both types of flares from periods of remission. Heart rate variability dropped significantly during flares, while heart rate and resting heart rate increased. Step counts decreased during inflammatory flares but not during symptom-only flares. Oxygen saturation stayed mostly the same, except for a slight drop seen in participants with Crohn’s disease.
These physiological changes could be detected as early as 7 weeks before a flare. Predictive models that combined multiple metrics — heart rate variability, heart rate, resting heart rate, and step count — were highly accurate, with F1 scores as high as 0.90 for predicting inflammatory flares and 0.83 for predicting symptomatic flares.
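For context on the reported accuracy metric, the F1 score is the harmonic mean of a model's precision and recall. A brief sketch follows; the confusion counts in the example are hypothetical and not taken from the study.

```python
# Minimal reminder of how an F1 score is computed from a classifier's
# confusion counts. The counts below are hypothetical, not the study's.
def f1_score(true_pos: int, false_pos: int, false_neg: int) -> float:
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return 2 * precision * recall / (precision + recall)

# Hypothetical flare-prediction counts: 45 true positives, 5 false positives,
# 5 false negatives -> precision = recall = 0.90, so F1 = 0.90
print(f1_score(45, 5, 5))  # 0.9
```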
In addition, wearable data helped differentiate between flares caused by active inflammation and those driven by symptoms alone. Even when symptoms were similar, heart rate variability, heart rate, and resting heart rate were significantly higher when inflammation was present—suggesting wearable devices may help address the common mismatch between symptoms and actual disease activity in IBD.
“These findings support the further evaluation of wearable devices in the monitoring of IBD,” the investigators concluded.
The study was supported by the National Institute of Diabetes and Digestive and Kidney Diseases and Ms. Jenny Steingart. The investigators disclosed additional relationships with Agomab, Lilly, Merck, and others.
Dana J. Lukin, MD, PhD, AGAF, of New York-Presbyterian Hospital/Weill Cornell Medicine, New York City, described the study by Hirten et al as “provocative.”
“While the data require a machine learning approach to transform the recorded values into predictive algorithms, it is intriguing that routinely recorded information from smart devices can be used in a manner to inform disease activity,” Lukin said in an interview. “Furthermore, the use of continuously recorded physiological data in this study likely reflects longitudinal health status more accurately than cross-sectional use of patient-reported outcomes or episodic biomarker testing.”
In addition to offering potentially higher accuracy than conventional monitoring, the remote strategy is also more convenient, he noted.
“The use of these devices is likely easier to adhere to than the use of other contemporary monitoring strategies involving the collection of stool or blood samples,” Lukin said. “It may become possible to passively monitor a larger number of patients at risk for flares remotely,” especially given that “almost half of Americans utilize wearables, such as the Apple Watch, Oura Ring, and Fitbit.”
Still, Lukin predicted challenges with widespread adoption.
“More than half of Americans do not routinely [use these devices],” Lukin said. “Cost, access to internet and smartphones, and adoption of new technology may all be barriers to more widespread use.”
He suggested that the present study offers proof of concept, but more prospective data are needed to demonstrate how this type of remote monitoring might improve real-world IBD care.
“Potential studies will assess change in healthcare utilization, corticosteroids, surgery, and clinical flare activity with the use of these data,” Lukin said. “As we learn more about how to handle the large amount of data generated by these devices, our algorithms can be refined to make a feasible platform for practices to employ in routine care.”
Lukin disclosed relationships with Boehringer Ingelheim, Takeda, Vedanta, and others.
Dana J. Lukin, MD, PhD, AGAF, of New York-Presbyterian Hospital/Weill Cornell Medicine, New York City, described the study by Hirten et al as “provocative.”
“While the data require a machine learning approach to transform the recorded values into predictive algorithms, it is intriguing that routinely recorded information from smart devices can be used in a manner to inform disease activity,” Lukin said in an interview. “Furthermore, the use of continuously recorded physiological data in this study likely reflects longitudinal health status more accurately than cross-sectional use of patient-reported outcomes or episodic biomarker testing.”
In addition to offering potentially higher accuracy than conventional monitoring, the remote strategy is also more convenient, he noted.
“The use of these devices is likely easier to adhere to than the use of other contemporary monitoring strategies involving the collection of stool or blood samples,” Lukin said. “It may become possible to passively monitor a larger number of patients at risk for flares remotely,” especially given that “almost half of Americans utilize wearables, such as the Apple Watch, Oura Ring, and Fitbit.”
Still, Lukin predicted challenges with widespread adoption.
“More than half of Americans do not routinely [use these devices],” Lukin said. “Cost, access to internet and smartphones, and adoption of new technology may all be barriers to more widespread use.”
He suggested that the present study offers proof of concept, but more prospective data are needed to demonstrate how this type of remote monitoring might improve real-world IBD care.
“Potential studies will assess change in healthcare utilization, corticosteroids, surgery, and clinical flare activity with the use of these data,” Lukin said. “As we learn more about how to handle the large amount of data generated by these devices, our algorithms can be refined to make a feasible platform for practices to employ in routine care.”
Lukin disclosed relationships with Boehringer Ingelheim, Takeda, Vedanta, and others.
Dana J. Lukin, MD, PhD, AGAF, of New York-Presbyterian Hospital/Weill Cornell Medicine, New York City, described the study by Hirten et al as “provocative.”
“While the data require a machine learning approach to transform the recorded values into predictive algorithms, it is intriguing that routinely recorded information from smart devices can be used in a manner to inform disease activity,” Lukin said in an interview. “Furthermore, the use of continuously recorded physiological data in this study likely reflects longitudinal health status more accurately than cross-sectional use of patient-reported outcomes or episodic biomarker testing.”
In addition to offering potentially higher accuracy than conventional monitoring, the remote strategy is also more convenient, he noted.
“The use of these devices is likely easier to adhere to than the use of other contemporary monitoring strategies involving the collection of stool or blood samples,” Lukin said. “It may become possible to passively monitor a larger number of patients at risk for flares remotely,” especially given that “almost half of Americans utilize wearables, such as the Apple Watch, Oura Ring, and Fitbit.”
Still, Lukin predicted challenges with widespread adoption.
“More than half of Americans do not routinely [use these devices],” Lukin said. “Cost, access to internet and smartphones, and adoption of new technology may all be barriers to more widespread use.”
He suggested that the present study offers proof of concept, but more prospective data are needed to demonstrate how this type of remote monitoring might improve real-world IBD care.
“Potential studies will assess change in healthcare utilization, corticosteroids, surgery, and clinical flare activity with the use of these data,” Lukin said. “As we learn more about how to handle the large amount of data generated by these devices, our algorithms can be refined to make a feasible platform for practices to employ in routine care.”
Lukin disclosed relationships with Boehringer Ingelheim, Takeda, Vedanta, and others.
according to investigators.
These findings suggest that widely used consumer wearables could support long-term monitoring of IBD and other chronic inflammatory conditions, lead author Robert P. Hirten, MD, of Icahn School of Medicine at Mount Sinai, New York, and colleagues reported.
“Wearable devices are an increasingly accepted tool for monitoring health and disease,” the investigators wrote in Gastroenterology. “They are frequently used in non–inflammatory-based diseases for remote patient monitoring, allowing individuals to be monitored outside of the clinical setting, which has resulted in improved outcomes in multiple disease states.”
Progress has been slower for inflammatory conditions, the investigators noted, despite interest from both providers and patients. Prior studies have explored activity and sleep tracking, or sweat-based biomarkers, as potential tools for monitoring IBD.
Hirten and colleagues took a novel approach, focusing on physiologic changes driven by autonomic nervous system dysfunction — a hallmark of chronic inflammation. Conditions like IBD are associated with reduced parasympathetic activity and increased sympathetic tone, which in turn affect heart rate and heart rate variability. Heart rate tends to rise during flares, while heart rate variability decreases.
Their prospective cohort study included 309 adults with Crohn’s disease (n = 196) or ulcerative colitis (n = 113). Participants used their own or a study-provided Apple Watch, Fitbit, or Oura Ring to passively collect physiological data, including heart rate, resting heart rate, heart rate variability, and step count. A subset of Apple Watch users also contributed oxygen saturation data.
Participants also completed daily symptom surveys using a custom smartphone app and reported laboratory values such as C-reactive protein, erythrocyte sedimentation rate, and fecal calprotectin, as part of routine care. These data were used to identify symptomatic and inflammatory flare periods.
Over a mean follow-up of about 7 months, the physiological data consistently distinguished both types of flares from periods of remission. Heart rate variability dropped significantly during flares, while heart rate and resting heart rate increased. Step counts decreased during inflammatory flares but not during symptom-only flares. Oxygen saturation stayed mostly the same, except for a slight drop seen in participants with Crohn’s disease.
These physiological changes could be detected as early as 7 weeks before a flare. Predictive models that combined multiple metrics — heart rate variability, heart rate, resting heart rate, and step count — were highly accurate, with F1 scores as high as 0.90 for predicting inflammatory flares and 0.83 for predicting symptomatic flares.
In addition, wearable data helped differentiate between flares caused by active inflammation and those driven by symptoms alone. Even when symptoms were similar, heart rate variability, heart rate, and resting heart rate were significantly higher when inflammation was present—suggesting wearable devices may help address the common mismatch between symptoms and actual disease activity in IBD.
“These findings support the further evaluation of wearable devices in the monitoring of IBD,” the investigators concluded.
The study was supported by the National Institute of Diabetes and Digestive and Kidney Diseases and Ms. Jenny Steingart. The investigators disclosed additional relationships with Agomab, Lilly, Merck, and others.
FROM GASTROENTEROLOGY
Low-Quality Food Environments Increase MASLD-related Mortality
according to investigators.
These findings highlight the importance of addressing disparities in food environments and social determinants of health to help reduce MASLD-related mortality, lead author Annette Paik, MD, of Inova Health System, Falls Church, Virginia, and colleagues reported.
“Recent studies indicate that food swamps and deserts, as surrogates for food insecurity, are linked to poor glycemic control and higher adult obesity rates,” the investigators wrote in Clinical Gastroenterology and Hepatology. “Understanding the intersection of these factors with sociodemographic and clinical variables offers insights into MASLD-related outcomes, including mortality.”
To this end, the present study examined the association between food environments and MASLD-related mortality across more than 2,195 US counties. County-level mortality data were obtained from the CDC WONDER database (2016-2020) and linked to food environment data from the US Department of Agriculture Food Environment Atlas using Federal Information Processing Standards (FIPS) codes. Food deserts were defined as low-income areas with limited access to grocery stores, while food swamps were characterized by a predominance of unhealthy food outlets relative to healthy ones.
Additional data on obesity, type 2 diabetes (T2D), and nine social determinants of health were obtained from CDC PLACES and other publicly available datasets. Counties were stratified into quartiles based on MASLD-related mortality rates. Population-weighted mixed-effects linear regression models were used to evaluate associations between food environment exposures and MASLD mortality, adjusting for region, rural-urban status, age, sex, race, insurance coverage, chronic disease prevalence, SNAP participation, and access to exercise facilities.
Counties with the worst food environments had significantly higher MASLD-related mortality, even after adjusting for clinical and sociodemographic factors. Compared with counties in the lowest quartile of MASLD mortality, those in the highest quartile had a greater proportion of food deserts (22.3% vs 14.9%; P < .001) and food swamps (73.1% vs 65.7%; P < .001). They also had a significantly higher prevalence of obesity (40.5% vs 32.5%), type 2 diabetes (15.8% vs 11.4%), and physical inactivity (33.7% vs 24.9%).
Demographically, counties with higher MASLD mortality had significantly larger proportions of Black and Hispanic residents, and were more likely to be rural and located in the South. These counties also had significantly lower median household incomes, higher poverty rates, fewer adults with a college education, lower access to exercise opportunities, greater SNAP participation, less broadband access, and more uninsured adults.
In multivariable regression models, both food deserts and food swamps remained independently associated with MASLD mortality. Counties in the highest quartile of food desert exposure had a 14.5% higher MASLD mortality rate, compared with the lowest quartile (P = .001), and those in the highest quartile for food swamp exposure had a 13.9% higher mortality rate (P = .005).
Type 2 diabetes, physical inactivity, and lack of health insurance were also independently associated with increased MASLD-related mortality.
“Implementing public health interventions that address the specific environmental factors of each county can help US policymakers promote access to healthy, culturally appropriate food choices at affordable prices and reduce the consumption of poor-quality food,” the investigators wrote. “Moreover, improving access to parks and exercise facilities can further enhance the impact of healthy nutrition. These strategies could help curb the growing epidemic of metabolic diseases, including MASLD and related mortality.”
This study was supported by King Faisal Specialist Hospital & Research Center, the Global NASH Council, Center for Outcomes Research in Liver Diseases, and the Beatty Liver and Obesity Research Fund, Inova Health System. The investigators disclosed no conflicts of interest.
A healthy lifestyle continues to be foundational to the management of metabolic dysfunction–associated steatotic liver disease (MASLD). Poor diet quality is a risk factor for developing MASLD in the US general population. Food deserts and food swamps are symptoms of socioeconomic hardship, as they both are characterized by limited access to healthy food (as described by the US Department of Agriculture Dietary Guidelines for Americans) owing to the absence of grocery stores/supermarkets. However, food swamps suffer from abundant access to unhealthy, energy-dense, yet nutritionally sparse (EDYNS) foods.
The article by Paik et al shows that food deserts and food swamps are associated not only with the burden of MASLD in the United States but also with MASLD-related mortality. The counties with the highest MASLD-related mortality had more food swamps and food deserts; higher rates of poverty, unemployment, household crowding, lack of broadband internet access, and lack of high school education; and larger proportions of elderly and Hispanic residents, and they were more likely to be located in the South.
MASLD appears to have origins in the dark underbelly of socioeconomic hardship that might preclude many of our patients from complying with lifestyle changes. Policy changes are urgently needed at a national level, from increasing incentives to establish grocery stores in food deserts to limiting the proportion of EDYNS foods in grocery stores and requiring conspicuous labeling of EDYNS foods by the Food and Drug Administration. At the individual practice level, we can support MASLD patients in the clinic with a dietitian and educational materials and, where possible, use applications that encourage healthy dietary habits to empower patients to choose healthy food options.
Niharika Samala, MD, is assistant professor of medicine, associate program director of the GI Fellowship, and director of the IUH MASLD/NAFLD Clinic at the Indiana University School of Medicine, Indianapolis. She reported no relevant conflicts of interest.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Infrequent HDV Testing Raises Concern for Worse Liver Outcomes
according to new findings.
The low testing rate suggests limited awareness of HDV-associated risks in patients with CHB, and underscores the need for earlier testing and diagnosis, lead author Robert J. Wong, MD, of Stanford University School of Medicine, Stanford, California, and colleagues, reported.
“Data among US populations are lacking to describe the epidemiology and long-term outcomes of patients with CHB and concurrent HDV infection,” the investigators wrote in Gastro Hep Advances (2025 Oct. doi: 10.1016/j.gastha.2024.10.015).
Prior studies have found that only 6% to 19% of patients with CHB get tested for HDV, and among those tested, the prevalence is relatively low—between 2% and 4.6%. Although relatively uncommon, HDV carries a substantial clinical and economic burden, Dr. Wong and colleagues noted, highlighting the importance of clinical awareness and accurate epidemiologic data.
The present study analyzed data from the Veterans Affairs (VA) Corporate Data Warehouse between 2010 and 2023. Adults with CHB were identified based on laboratory-confirmed markers and ICD-9/10 codes. HDV testing (anti-HDV antibody and HDV RNA) was assessed, and predictors of testing were evaluated using multivariable logistic regression.
To examine liver-related outcomes, patients who tested positive for HDV were propensity score–matched 1:2 with CHB patients who tested negative. Matching accounted for age, sex, race/ethnicity, HBeAg status, antiviral treatment, HCV and HIV coinfection, diabetes, and alcohol use. Patients with cirrhosis or hepatocellular carcinoma (HCC) at baseline were excluded. Incidence of cirrhosis, hepatic decompensation, and HCC was estimated using competing risks Nelson-Aalen methods.
Among 27,548 veterans with CHB, only 16.1% underwent HDV testing. Of those tested, 3.25% were HDV positive. Testing rates were higher among patients who were HBeAg positive, on antiviral therapy, or identified as Asian or Pacific Islander.
Conversely, testing was significantly less common among patients with high-risk alcohol use, past or current drug use, cirrhosis at diagnosis, or HCV coinfection. In contrast, HIV coinfection was associated with increased odds of being tested.
Among those tested, HDV positivity was more likely in patients with HCV coinfection, cirrhosis, or a history of drug use. On multivariable analysis, these factors were independent predictors of HDV positivity.
In the matched cohort of 71 HDV-positive patients and 140 HDV-negative controls, the incidence of cirrhosis was more than 3-fold higher in HDV-positive patients (4.39 vs 1.30 per 100,000 person-years; P < .01), and hepatic decompensation was over 5 times more common (2.18 vs 0.41 per 100,000 person-years; P = .01). There was also a nonsignificant trend toward increased HCC risk in the HDV group.
“These findings align with existing studies and confirm that among a predominantly non-Asian US cohort of CHB patients, presence of concurrent HDV is associated with more severe liver disease progression,” the investigators wrote. “These observations, taken together with the low rates of HDV testing overall and particularly among high-risk individuals, emphasizes the need for greater awareness and novel strategies on how to improve HDV testing and diagnosis, particularly given that novel HDV therapies are on the near horizon.”
The study was supported by Gilead. The investigators disclosed additional relationships with Exact Sciences, GSK, Novo Nordisk, and others.
Hepatitis D virus (HDV) is an RNA “sub-virus” that infects patients with co-existing hepatitis B virus (HBV) infections. HDV infection currently affects approximately 15-20 million people worldwide but is an orphan disease in the United States with fewer than 100,000 individuals infected today.
Those with HDV have a 70% lifetime risk of hepatocellular carcinoma (HCC), cirrhosis, liver failure, death, or liver transplant. Yet no treatment for HDV is currently approved by the Food and Drug Administration (FDA) in the US, and only one therapy has full approval from the European Medicines Agency in the European Union.
Despite the severity of HDV and the limited treatment options, screening remains severely inadequate and is often restricted to sequential testing of individuals at high risk. HDV screening would benefit from a revamped reflex approach: when individuals diagnosed with HBV test positive for hepatitis B surface antigen (HBsAg), testing automatically proceeds to total anti-HDV antibody and then reflexes again to HDV-RNA quantitation by polymerase chain reaction (PCR). This is especially true in the Veterans Administration (VA)’s hospitals and clinics, where Wong and colleagues found very low rates of HDV testing among a national cohort of US veterans with chronic HBV.
This study highlights the importance of timely HDV testing using reflex tools to improve diagnosis and HDV treatment, reducing long-term risks of liver-related morbidity and mortality.
Robert G. Gish, MD, AGAF, is principal at Robert G Gish Consultants LLC, clinical professor of medicine at Loma Linda University, Loma Linda, Calif., and medical director of the Hepatitis B Foundation. His complete list of disclosures can be found at www.robertgish.com/about.
FROM GASTRO HEP ADVANCES
Safety Profile of GLP-1s ‘Reassuring’ in Upper Endoscopy
according to a meta-analysis of more than 80,000 patients.
Safety profiles, however, were comparable across groups, suggesting that prolonged fasting may be a sufficient management strategy, instead of withholding GLP-1RAs, lead author Antonio Facciorusso, MD, PhD, of the University of Foggia, Italy, and colleagues reported.
“The impact of GLP-1RAs on slowing gastric motility has raised concerns in patients undergoing endoscopic procedures, particularly upper endoscopies,” the investigators wrote in Clinical Gastroenterology and Hepatology. “This is due to the perceived risk of aspiration of retained gastric contents in sedated patients and the decreased visibility of the gastric mucosa, which can reduce the diagnostic yield of the examination.”
The American Society of Anesthesiologists (ASA) recommends withholding GLP-1RAs before procedures or surgery, whereas AGA suggests an individualized approach, citing limited supporting data.
A previous meta-analysis reported that GLP-1RAs mildly delayed gastric emptying, but clinical relevance remained unclear.
The present meta-analysis aimed to clarify this uncertainty by analyzing 13 retrospective studies that involved 84,065 patients undergoing upper endoscopy. Outcomes were compared among GLP-1RA users vs non-users, including rates of retained gastric contents, aborted procedures, and adverse events.
Patients on GLP-1RAs had significantly higher rates of retained gastric contents than non-users (odds ratio [OR], 5.56), a finding that held steady (OR, 4.20) after adjusting for age, sex, diabetes, body mass index, and other therapies.
GLP-1RAs were also associated with an increased likelihood of aborted procedures (OR, 5.13; 1% vs. 0.3%) and a higher need for repeat endoscopies (OR, 2.19; 1% vs 2%); however, Facciorusso and colleagues noted that these events, in absolute terms, were relatively uncommon.
“The rate of aborted and repeat procedures in the included studies was low,” the investigators wrote. “This meant that only for every 110 patients undergoing upper endoscopy while in GLP-1RA therapy would we observe an aborted procedure and only for every 120 patients would we need to repeat the procedure.”
The overall safety profile of GLP-1RAs in the context of upper endoscopy remained largely reassuring, they added. Specifically, rates of bronchial aspiration were not significantly different between users and non-users. What’s more, no single study reported a statistically significant increase in major complications, including pulmonary adverse events, among GLP-1RA users.
According to Facciorusso and colleagues, these findings suggest that retained gastric contents do not appear to substantially heighten the risk of serious harm, though further prospective studies are needed.
“Our comprehensive analysis indicates that, while the use of GLP-1RA results in higher rates of [retained gastric contents], the actual clinical impact appears to be limited,” they wrote. “Therefore, there is no strong evidence to support the routine discontinuation of the drug before upper endoscopy procedures.”
Instead, they supported the AGA task force’s recommendation for an individualized approach, and not withholding GLP-1RAs unnecessarily, calling this “the best compromise.”
“Prolonging the duration of fasting for solids could represent the optimal approach in these patients, although this strategy requires further evaluation,” the investigators concluded.
The investigators disclosed no conflicts of interest.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Two Cystic Duct Stents Appear Better Than One
according to a retrospective multicenter study.
These findings suggest that endoscopists should prioritize dual stent placement when feasible, and consider adding a second stent in patients who previously received a single stent, James D. Haddad, MD, of the University of Texas Southwestern, Dallas, and colleagues reported.
The American Gastroenterological Association (AGA) has recognized the role of endoscopic drainage in managing acute cholecystitis in high-risk patients, but specific guidance on optimal technique and follow-up remains unclear, the investigators wrote in Techniques and Innovations in Gastrointestinal Endoscopy.
“Despite accumulating data and increased interest in this technique, clear guidance on the ideal strategy for ETGBD is lacking,” Dr. Haddad and colleagues wrote. “For example, the optimal size, number, and follow-up of cystic duct stents for patients undergoing ETGBD has not been well established.”
To address this knowledge gap, the investigators analyzed data from 75 patients at five academic medical centers who had undergone ETGBD between June 2013 and October 2022. Patients were divided into two groups based on whether they received one or two cystic duct stents.
The primary outcome was clinical success, defined as symptom resolution without requiring another drainage procedure. Secondary outcomes included technical success (defined as successful stent placement), along with rates of adverse events and unplanned reinterventions.
Of the 75 patients, 59 received a single stent, while 16 received dual stents. The median follow-up time was 407 days overall, with longer follow-up in the single-stent group (433 days) than in the double-stent group (118 days).
Clinical success was reported in 81.3% of cases, while technical success was achieved in 88.2% of cases.
Patients who received two stents had significantly lower rates of unplanned reintervention, compared with those who received a single stent (0% vs 25.4%; P = .02). The median time to unplanned reintervention in the single-stent group was 210 days.
Use of a 7 French stent was strongly associated with placement of two stents (odds ratio [OR], 15.5; P = .01). Similarly, patients with a prior percutaneous cholecystostomy tube were significantly more likely to have two stents placed (OR, 10.8; P = .001).
Adverse events were uncommon, with an overall rate of 6.7%, and rates were not statistically different between groups. Post-endoscopic retrograde cholangiopancreatography pancreatitis was the most common adverse event, occurring in two patients in the single-stent group and one patient in the double-stent group. There were no reported cases of cystic duct or gallbladder perforation.
“In conclusion,” the investigators wrote, “ETGBD with dual transpapillary gallbladder stenting is associated with a lower rate of unplanned reinterventions, compared with that with single stenting, and has a low rate of adverse events. Endoscopists performing ETGBD should consider planned exchange of solitary transpapillary gallbladder stents or interval ERCP for reattempted placement of a second stent if placement of two stents is not possible at the index ERCP.”
The investigators disclosed relationships with Boston Scientific, Motus GI, and ConMed.
FROM TECHNIQUES AND INNOVATIONS IN GASTROINTESTINAL ENDOSCOPY
Circulating Proteins Predict Crohn’s Disease Years in Advance
The 29-protein biosignature, which was validated across multiple independent cohorts, could potentially open doors to new preclinical interventions, lead author Olle Grännö, MD, of Örebro University in Sweden, and colleagues reported.
“Predictive biomarkers of future clinical onset of active inflammatory bowel disease could detect the disease during ‘a window of opportunity’ when the immune dysregulation is potentially reversible,” the investigators wrote in Gastroenterology.
Preclinical biomarker screening has proven effective in other immune-mediated diseases, such as type 1 diabetes, where risk stratification using autoantibodies enabled early intervention that delayed disease onset, they noted.
Previous studies suggested similar potential for inflammatory bowel disease (IBD) via predictive autoantibodies and serum proteins, although the accuracy of these markers was not validated in external cohorts. The present study aimed to fill this validation gap.
First, the investigators measured 178 plasma proteins in blood samples taken from 312 individuals before they were diagnosed with IBD. Using machine learning, Dr. Grännö and colleagues compared these findings with blood-matched controls who remained free of IBD through follow-up. This process revealed the 29-protein signature.
In the same discovery cohort, the panel of 29 proteins differentiated preclinical CD cases from controls with an area under the curve (AUC) of 0.85. The signature was then validated in an independent preclinical cohort of CD patients, with an AUC of 0.87.
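To make the workflow concrete, the following is a minimal sketch of how a sparse protein signature might be derived from a discovery cohort and then checked by AUC in an external validation cohort. It uses simulated data, an L1-penalized logistic regression, and arbitrary settings; the variable names, cohort sizes beyond those reported, and modeling choices are illustrative assumptions, not details taken from the study.

```python
# Hypothetical sketch: derive a sparse multi-protein signature from preclinical
# samples and evaluate it by AUC in an external validation cohort. All data
# below are simulated placeholders, not the study's proteomic measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated stand-ins for 178 plasma protein levels in preclinical cases and
# matched controls; in practice these come from the assay data.
X_discovery = rng.normal(size=(312, 178))
y_discovery = rng.integers(0, 2, size=312)      # 1 = later diagnosed with CD
X_validation = rng.normal(size=(200, 178))
y_validation = rng.integers(0, 2, size=200)

# An L1 penalty is one common way to shrink a large panel down to a sparse
# signature; the study's actual machine-learning approach may differ.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
model.fit(X_discovery, y_discovery)

# Proteins retained with nonzero coefficients form the candidate signature.
coefs = model.named_steps["logisticregression"].coef_.ravel()
print(f"{np.count_nonzero(coefs)} proteins retained in the signature")

# Discrimination is summarized by the area under the ROC curve (AUC),
# reported separately for the discovery and external validation cohorts.
auc_disc = roc_auc_score(y_discovery, model.predict_proba(X_discovery)[:, 1])
auc_val = roc_auc_score(y_validation, model.predict_proba(X_validation)[:, 1])
print(f"AUC discovery = {auc_disc:.2f}, validation = {auc_val:.2f}")
```

The AUC values quoted above come from models fit to the actual proteomic data, not from a simulation like this one; the sketch only illustrates the derive-then-validate structure of the analysis.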
While accuracy increased with proximity to clinical disease onset, the model remained highly predictive up to 16 years before CD diagnosis, at which time the AUC was 0.82. The panel showed perfect performance among newly diagnosed CD patients, with an AUC of 1.0, supporting clinical relevance.
Predictive power was statistically significant but less compelling among individuals with preclinical ulcerative colitis (UC). In this IBD subgroup, the AUC for the discovery and validation cohorts was 0.77 and 0.67, respectively, while newly diagnosed patients had an AUC of 0.95.
“In preclinical samples, downregulated (but not upregulated) proteins related to gut barrier integrity and macrophage functionality correlated with time to diagnosis of CD,” Dr. Grännö and colleagues wrote. “Contrarily, all proteins associated with preclinical UC were upregulated, and only one protein marker correlated with the time to diagnosis.”
These findings suggest that disruptions in gut barrier integrity and macrophage function precede clinical CD onset, they explained, potentially serving as an early signal of inflammation-driven intestinal damage. In contrast, the preclinical UC signature primarily involved upregulated inflammatory markers.
Dr. Grännö and colleagues also examined the influence of genetic and environmental factors by comparing preclinical IBD signatures in unrelated and related twin pairs.
The CD biosignature had an AUC of 0.89 when comparing individuals with preclinical CD to matched external (unrelated) healthy twins. Predictive ability dropped significantly (AUC = 0.58) when comparing CD cases to their own healthy twin siblings, suggesting that genetic and shared environmental factors have a “predominant influence” on protein dysregulation.
In contrast, AUC among unrelated vs related twin controls was more similar for UC, at 0.76 and 0.64, respectively, indicating “a limited impact” of genetic and environmental factors on the protein signature.
Altogether, this study reinforces the concept of a long preclinical phase in CD, and highlights the potential for early detection and intervention, according to the investigators.
“The long preclinical period in CD endorses the adoption of early preventive strategies (e.g., diet alterations and medication) to potentially attenuate disease progression and improve the natural history of CD,” they concluded.
This study was funded by the Swedish Research Council, the Swedish Foundation for Strategic Research, the Örebro University Hospital Research Foundation, and others. The investigators disclosed relationships with Pfizer, Janssen, AbbVie, and others.
Preclinical biomarker discovery for inflammatory bowel diseases (IBD) is now a key area of study, aiming to identify the earliest stages of disease development and to find opportunities for early intervention. The study by Grännö and colleagues taps into this area and provides a significant advance in the early detection of Crohn’s disease (CD) with a validated 29-protein plasma biomarker signature.
With an AUC of up to 0.87 in preclinical CD cases, and 0.82 as early as 16 years before diagnosis, these findings strongly support the notion that CD has a prolonged preclinical phase detectable many years before clinical presentation. Importantly, the identified protein signatures also shed light on distinct pathophysiological mechanisms between CD and ulcerative colitis (UC), with CD characterized by early disruptions in gut barrier integrity and macrophage function, while UC was marked chiefly by upregulated inflammatory markers.
For clinical practitioners, these findings have strong transformative potential. With further validation in larger cohorts and improved clinical accessibility, preclinical biomarker screening could become a routine tool for risk stratification in at-risk individuals, such as those with a strong family history or genetic predisposition. This could enable early interventions, including dietary modifications and potentially prophylactic therapies, to delay or even prevent disease onset. Given that similar approaches have proven effective in type 1 diabetes, applying this strategy to IBD could significantly alter disease progression and patient outcomes.
Challenges remain before implementation in clinical practice could be realized. Standardized thresholds for risk assessment, cost-effectiveness analyses, and potential therapeutic strategies tailored to biomarker-positive individuals require further exploration. However, this study provides important data needed for a paradigm shift in IBD management — one that moves from reactive treatment to proactive prevention.
Arno R. Bourgonje, MD, PhD, is a postdoctoral fellow at the Division of Gastroenterology, Icahn School of Medicine at Mount Sinai, New York, and at the University Medical Center Groningen in Groningen, the Netherlands. He is involved in the European INTERCEPT consortium, which is focused on prediction and prevention of IBD. He reported no conflicts of interest.
FROM GASTROENTEROLOGY
New Risk Score Might Improve HCC Surveillance Among Cirrhosis Patients
, according to a recent phase 3 biomarker validation study.
The Prognostic Liver Secretome Signature with Alpha-Fetoprotein plus Age, Male Sex, Albumin-Bilirubin, and Platelets (PAaM) score integrates both molecular and clinical variables to effectively classify cirrhosis patients by their risk of developing HCC, potentially sparing low-risk patients from unnecessary surveillance, lead author Naoto Fujiwara, MD, PhD, of the University of Texas Southwestern Medical Center, Dallas, and colleagues reported.
“Hepatocellular carcinoma risk stratification is an urgent unmet need for cost-effective screening and early detection in patients with cirrhosis,” the investigators wrote in Gastroenterology. “This study represents the largest and first phase 3 biomarker validation study that establishes an integrative molecular/clinical score, PAaM, for HCC risk stratification.”
The PAaM score combines an 8-protein prognostic liver secretome signature with traditional clinical variables, including alpha-fetoprotein (AFP) levels, age, sex, albumin-bilirubin levels, and platelet counts. The score stratifies patients into high-, intermediate-, and low-risk categories.
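As a rough illustration of how such an integrative score can work, the sketch below combines a molecular signature value with clinical variables in a linear score and buckets patients into three surveillance tiers. The weights and cutoffs are placeholders chosen for readability; they are not the published PAaM coefficients or thresholds.

```python
# Hypothetical sketch of an integrative molecular/clinical risk score that
# stratifies patients with cirrhosis into three surveillance tiers. The inputs
# mirror the variables described for PAaM, but every weight and cutoff below
# is a made-up placeholder, not the published model.
from dataclasses import dataclass

@dataclass
class CirrhosisPatient:
    plsec: float        # 8-protein secretome signature value (assay units)
    afp: float          # alpha-fetoprotein, ng/mL
    age: int
    male: bool
    albi: float         # albumin-bilirubin (ALBI) score
    platelets: float    # platelet count, x10^9/L

def risk_score(p: CirrhosisPatient) -> float:
    """Toy linear combination of molecular and clinical variables."""
    return (
        1.0 * p.plsec
        + 0.5 * (p.afp > 20)      # elevated-AFP flag (illustrative cutoff)
        + 0.03 * p.age
        + 0.4 * p.male
        + 0.8 * p.albi
        - 0.005 * p.platelets
    )

def risk_tier(score: float) -> str:
    """Bucket the continuous score into surveillance tiers (toy cutoffs)."""
    if score >= 3.0:
        return "high"
    if score >= 1.5:
        return "intermediate"
    return "low"

patient = CirrhosisPatient(plsec=1.2, afp=35.0, age=62, male=True,
                           albi=-1.8, platelets=110.0)
print(risk_tier(risk_score(patient)))
```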
The PAaM score was validated using 2 independent prospective cohorts in the United States: the statewide Texas Hepatocellular Carcinoma Consortium (THCCC) and the nationwide Hepatocellular Carcinoma Early Detection Strategy (HEDS). Across both cohorts, 3,484 patients with cirrhosis were followed over time to assess the development of HCC.
In the Texas cohort, comprising 2,156 patients with cirrhosis, PAaM classified 19% of patients as high risk, 42% as intermediate risk, and 39% as low risk. The annual incidence of HCC differed significantly across these groups, with high-risk patients experiencing a 5.3% incidence rate, versus 2.7% for intermediate-risk patients and 0.6% for low-risk patients (P < .001). Compared with those in the low-risk group, high-risk patients had a sub-distribution hazard ratio (sHR) of 7.51 for developing HCC, while intermediate-risk patients had an sHR of 4.20.
In the nationwide HEDS cohort, which included 1,328 patients, PAaM similarly stratified 15% of participants as high risk, 41% as intermediate risk, and 44% as low risk. Annual HCC incidence rates were 6.2%, 1.8%, and 0.8% for high-, intermediate-, and low-risk patients, respectively (P < .001). Among these patients, sub-distribution hazard ratios for HCC were 6.54 for high-risk patients and 1.77 for intermediate-risk patients, again underscoring the tool’s potential to identify individuals at elevated risk of developing HCC.
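For readers less familiar with the incidence figures quoted above, a crude annual incidence rate can be summarized as events per 100 person-years of follow-up, as in the toy calculation below; the counts shown are invented for illustration and do not reproduce the cohort data.

```python
# Hypothetical sketch of summarizing annual HCC incidence per risk tier as
# events per 100 person-years. The event counts and follow-up totals are
# placeholders, not the THCCC or HEDS data.
def annual_incidence(events: int, person_years: float) -> float:
    """Crude incidence rate, expressed as % per year."""
    return 100.0 * events / person_years

followup = {
    # tier: (observed HCC cases, total person-years of follow-up)
    "high": (40, 750.0),
    "intermediate": (45, 1700.0),
    "low": (8, 1400.0),
}

for tier, (events, pys) in followup.items():
    print(f"{tier:>12}: {annual_incidence(events, pys):.1f}% per year")
```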
The PAaM score outperformed existing models like the aMAP score and the PLSec-AFP molecular marker alone, with consistent superiority across a diverse range of cirrhosis etiologies, including metabolic dysfunction–associated steatotic liver disease (MASLD), alcohol-associated liver disease (ALD), and cured hepatitis C virus (HCV) infection.
Based on these findings, high-risk patients might benefit from more intensive screening strategies, Fujiwara and colleagues suggested, while intermediate-risk patients could continue with semi-annual ultrasound-based screening. Of note, low-risk patients—comprising about 40% of the study population—could potentially avoid frequent screenings, thus reducing healthcare costs and minimizing unnecessary interventions.
“This represents a significant step toward the clinical translation of an individual risk-based HCC screening strategy to improve early HCC detection and reduce HCC mortality,” the investigators concluded.
This study was supported by the National Cancer Institute, the Department of Veterans Affairs, the Japan Society for the Promotion of Science, and others. The investigators disclosed additional relationships with Boston Scientific, Sirtex, Bayer, and others.
Nancy S. Reau, MD, AGAF, of RUSH University in Chicago, highlighted both the promise and challenges of the PAaM score for HCC risk stratification, emphasizing that current liver cancer screening strategies remain inadequate, with only about 25% of patients receiving guideline-recommended surveillance.
“An easy-to-apply cost effective tool could significantly improve screening strategies, which should lead to earlier identification of liver cancer—at a time when curative treatment options are available,” Reau said.
PAaM, however, may be impractical for routine use.
“A tool that classifies people into 3 different screening strategies and requires longitudinal applications and re-classification could add complexity,” she explained, predicting that “clinicians aren’t going to use it correctly.”
Reau was particularly concerned about the need for repeated assessments over time.
“People change,” she said. “A low-risk categorization by PAaM at the age of 40 may no longer be relevant at 50 or 60 as liver disease progresses.”
Although the tool is “exciting,” Reau suggested that it is also “premature” until appropriate reclassification intervals are understood.
She also noted that some patients still develop HCC despite being considered low risk, including cases of HCC that develop in non-cirrhotic HCV infection or MASLD.
Beyond the above clinical considerations, Dr. Reau pointed out several barriers to implementing PAaM in routine practice, starting with the under-recognition of cirrhosis. Even if patients are identified, ensuring both clinicians and patients adhere to screening recommendations remains a challenge.
Finally, financial considerations may pose obstacles.
“If some payers cover the tool and others do not, it will be very difficult to implement,” Dr. Reau concluded.
Reau reported no conflicts of interest.
FROM GASTROENTEROLOGY