Repeat gastroscopy shows real-world value for detecting malignant ulcers

Most malignancies associated with new gastric ulcers were identified on the initial gastroscopy in a retrospective cohort study, but follow-up procedures are still worth performing, according to investigators.

“Although the additional yield of malignancy at follow-up gastroscopy is low at 2%, our data supports the current strategy of repeat endoscopic assessment given variables in obtaining adequate ulcer histology and the lack of reliable endoscopic predictors of a malignant ulcer,” the study’s authors Linda Yang, MBBS, of the University of Melbourne, and colleagues wrote in the Journal of Clinical Gastroenterology.

Recommendations from the British Society of Gastroenterology emphasize the importance of repeat gastroscopy and biopsy of gastric ulcers within 8 weeks of the index gastroscopy. The American Society for Gastrointestinal Endoscopy similarly recommends repeat gastroscopy in high-risk patients (ulcers >2 cm) within 12 weeks of the initial endoscopy. The authors noted that, despite these recommendations, there is no consensus on the timing of repeat gastroscopy, no established ulcer biopsy protocols exist, and data on real-world repeat gastroscopy practices and follow-up outcomes are lacking.

To understand current practice in gastric ulcer follow-up, Dr. Yang and colleagues retrospectively examined new gastric ulcers diagnosed on gastroscopy between 2013 and 2017 at two Australian institutions.

Out of 795 patients (median age, 69 years; 59% male), approximately 55% (n = 440) underwent repeat gastroscopy at a median of 8 weeks after the initial endoscopy procedure. Overall, 52 patients (7%) received a malignancy diagnosis, with 83% (n = 43) of these diagnoses detected at the index gastroscopy; 2% overall received the diagnosis based on follow-up gastroscopy.
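
For readers who want to retrace the arithmetic, the short sketch below reproduces the reported proportions from the counts quoted above. The 9 follow-up diagnoses are inferred as 52 minus 43, and the 2% additional yield is assumed to use the 440 patients who underwent repeat gastroscopy as its denominator; neither assumption is stated explicitly in the text.

```python
# Arithmetic check of the yields reported above. The 9 follow-up diagnoses are
# inferred as 52 - 43, and the denominator for the 2% additional yield is
# assumed to be the 440 patients who had a repeat gastroscopy.
total_patients = 795
repeat_gastroscopy = 440
malignancies_total = 52
malignancies_at_index = 43
malignancies_at_followup = malignancies_total - malignancies_at_index  # 9

print(f"Overall malignancy rate: {malignancies_total / total_patients:.0%}")                      # ~7%
print(f"Share detected at index gastroscopy: {malignancies_at_index / malignancies_total:.0%}")   # ~83%
print(f"Additional yield at follow-up: {malignancies_at_followup / repeat_gastroscopy:.1%}")      # ~2.0%
```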

“I think these numbers would support the assumptions of most endoscopists that a small but still significant portion of new gastric ulcers will turn out to be malignant,” explained Michael DeSimone, MD, gastroenterologist at Emerson Hospital in Concord, Mass. Dr. DeSimone, who wasn’t involved in the study, said the data support the importance of “biopsy in the initial exam and bringing these patients back for a repeat endoscopy to check healing and biopsy unless malignancy was confirmed on the initial exam.”

In the study, a multivariate analysis revealed several predictors of benign ulcers, including lack of endoscopic suspicion at the index gastroscopy (odds ratio, 0.1; 95% confidence interval, 0.03-0.13; P ≤ .005), complete healing on repeat gastroscopy (OR, 0.5; 95% CI, 0.34-0.70; P = .036), and benign histology on initial biopsy (OR, 0.12; 95% CI, 0.43-0.90; P ≤ .005). However, no patient-related factors – such as H. pylori status and ethnicity – were associated with an increased likelihood of malignancy.

“Knowing that low suspicion for malignancy on initial exam and benign histology on initial biopsies predict benign ulcers ... reasonable endoscopists could feel more comfortable not repeating an exam where procedure safety is a significant concern if their suspicion was low on the index exam, especially if they had the opportunity to take initial biopsies and those ultimately show benign histology,” said Dr. DeSimone.

The investigators noted that the main reason 45% of patients received no follow-up gastroscopy was that their ulcers had a nonsuspicious appearance.

“Although not recommended, this is widely accepted clinical practice, especially in comorbid or elderly patients where the decision to undergo repeat gastroscopy requires consideration of their comorbidities, frailty, and life expectancy,” they wrote. They suggested that this, combined with the high nonattendance rate in the cohort, emphasizes the importance of ulcer biopsy at the index gastroscopy, even in the absence of suspicious features.

Clinicians in the current study performed random gastric biopsies in 27% (n = 218) of patients. Helicobacter pylori, which is frequently described in high-risk populations, was detected in 22% of patients who had an ulcer or gastric biopsy performed.

The relatively low frequency with which random gastric biopsies were performed during the index endoscopy to look for H. pylori is a bit surprising, said Linda Lee, MD, medical director of endoscopy at Brigham and Women’s Hospital in Boston, given the bacterium remains a common and readily treatable etiology of gastric ulcers. “While it is known that yield of biopsy can be lower for H. pylori in the setting of acute upper gastrointestinal bleeding, it is still important to evaluate for this especially since biopsies carry low risk for bleeding,” explained Dr. Lee, who also wasn’t part of the study.

She added that the study’s high negative predictive value for endoscopic suspicion of malignancy (96%) is reassuring. “This, combined with benign histology on initial biopsies, could serve to identify which patients should return for repeat endoscopy.”
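
Negative predictive value in this context is the share of ulcers judged endoscopically nonsuspicious that ultimately proved benign. The sketch below illustrates that calculation with hypothetical counts chosen only to yield 96%; it does not reproduce the study's actual figures.

```python
# Negative predictive value (NPV) = true negatives / (true negatives + false negatives).
# The counts below are hypothetical, chosen only to illustrate a 96% NPV;
# they are not the study's actual 2x2 table.
nonsuspicious_and_benign = 480     # nonsuspicious at endoscopy, no malignancy (true negatives)
nonsuspicious_but_malignant = 20   # nonsuspicious at endoscopy, malignant (false negatives)

npv = nonsuspicious_and_benign / (nonsuspicious_and_benign + nonsuspicious_but_malignant)
print(f"Negative predictive value: {npv:.0%}")  # 96%
```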

“We need to ensure that more biopsies are obtained during the index endoscopy from gastric ulcers as well as randomly and that, during follow-up endoscopy, biopsies are obtained from all [partially or fully] nonhealed ulcers,” added Dr. Lee. She suggested it could be helpful to develop an evidence-based, prospectively validated algorithm and/or identify risk factors that reliably help endoscopists decide who would benefit from repeat endoscopy, “especially since there is a relatively high rate of noncompliance with a low rate of malignancy.”

A primary limitation of the study was its retrospective nature; however, the authors pointed out that it currently represents the largest multicenter retrospective cohort analysis of endoscopic follow-up for gastric ulcers.

“Before any change can be recommended to current clinical practice, prospective and potentially randomized studies are required to validate our findings and elucidate any high-risk features associated with malignant gastric ulcer,” the investigators wrote. Doing so could lead to reductions in health care cost and patient burden.

Some of the study authors received funding from the National Health and Medical Research Council of Australia; the remaining authors declared having nothing to disclose. Dr. DeSimone and Dr. Lee reported having no relevant conflicts.

FROM THE JOURNAL OF CLINICAL GASTROENTEROLOGY

WTC early responders have higher prevalence of liver disease

Emergency responders to the World Trade Center (WTC) attack in 2001 paid a significant physical cost for their service in the form of exposure to chemicals, dust, and airborne particulates causally linked to hepatotoxicity. As we near the 20th anniversary of these attacks, researchers have determined that those responders who arrived at the WTC site earlier have a significantly higher prevalence of hepatic steatosis compared with those who arrived in the days that followed.

New York City firefighters take a much-needed break during emergency response efforts following the 9/11 attacks.

“This research is some of the first to suggest that there may be a link between the amount of exposure experienced by responders to the WTC site and the higher likelihood of excessive accumulation of fat in their livers,” study author Artit Jirapatnakul, PhD, of Icahn School of Medicine at Mount Sinai, New York, said in an interview. These findings were published in the American Journal of Industrial Medicine.

The excessive accumulation of liver fat is an indicator of liver injury, which can also predict subsequent disease, such as cirrhosis, liver failure, and liver cancer.

Dr. Jirapatnakul said that arrival time to the WTC disaster may prove an important factor for predicting the risk of liver disease in this population and directing treatment to them accordingly.

“By identifying individuals with markers of liver injury, such as excess fat, we can offer referral to liver specialists and thereby open the door to early treatment,” he said.

“Our most important message is that many liver diseases can be treated if caught early,” Dr. Jirapatnakul added. “Early detection requires proactive monitoring because most liver diseases have few, if any, symptoms during the early stages.”

More than 20,000 men and women who responded to the WTC site on Sept. 11, 2001, were exposed to particulate matter and chemicals known to cause liver damage and increase the risk of toxicant‐associated fatty liver disease. These responders have been offered screening and treatment of different conditions associated with the attack, including CT lung cancer screening for those meeting age and smoking status criteria.
 

Measuring the impact of response time on the liver

To investigate the dose-response association between WTC site exposure intensity and the risk of hepatic steatosis, Dr. Jirapatnakul and colleagues reviewed low-dose CT chest scans of all participants in the WTC General Responders Cohort (GRC) who had available laboratory data within a 12-month period from their first scan following the Sept. 11, 2001, attack. Only CT chest scans performed between Sept. 11, 2001, and Dec. 31, 2018, were collected and reviewed in the study.

A total of 1,788 WTC responders were included (83.7% were male; mean age at time of attack, 42.5 years). Overall, 56% of responders in the study were White, and 20.4% were current smokers. The mean body mass index of the group was 30.1 kg/m2.

The investigators stratified dust exposure into five groups according to when the responders arrived at the WTC site: Sept. 11, 2001, in the dust cloud; Sept. 11, no dust cloud (same-day arrival); Sept. 12 or 13 (second‐ and third‐day arrival); Sept. 14 to the end of September (fourth‐day arrival); and October and beyond.

The median duration between Sept. 11, 2001, and the earliest available CT scan was 11.3 years. Liver density was measured via Statistics‐based Liver Density Estimation from Imaging, a previously validated algorithm, with a slice thickness of 1.25 mm or below. On their earliest CT, approximately 14.4% (n = 258) of responders had liver attenuation < 40 Hounsfield units (HU). The prevalence of liver attenuation < 40 HU was 17% for responders who arrived on the day of the attack, 16% for responders who arrived at the site on Sept. 12 or 13, 10.9% for responders who arrived Sept. 14 through 30, and 9% for responders who arrived at the WTC site on Oct. 1, 2001, or later (P = .0015).
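
The classification step described here is a simple threshold rule: a scan is flagged for hepatic steatosis when measured liver attenuation falls below 40 HU, and prevalence is then tabulated by arrival group. A minimal sketch of that logic follows; the records are invented for illustration and are not study data.

```python
# Illustration of the threshold-and-tabulate logic described above: flag
# steatosis when liver attenuation is below 40 Hounsfield units, then compute
# prevalence by WTC arrival group. The records are invented, not study data.
from collections import defaultdict

records = [
    {"arrival": "Sept. 11", "liver_hu": 35.2},
    {"arrival": "Sept. 11", "liver_hu": 52.8},
    {"arrival": "Sept. 12-13", "liver_hu": 38.9},
    {"arrival": "Oct. 1 or later", "liver_hu": 61.5},
]

counts = defaultdict(lambda: [0, 0])  # arrival group -> [flagged, total]
for rec in records:
    flagged = rec["liver_hu"] < 40  # threshold used in the study
    counts[rec["arrival"]][0] += flagged
    counts[rec["arrival"]][1] += 1

for group, (n_flagged, n_total) in counts.items():
    print(f"{group}: {n_flagged}/{n_total} ({n_flagged / n_total:.0%}) with attenuation < 40 HU")
```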

There was a statistically significant trend of increasing liver steatosis with earlier times of arrival (P < .0001). The WTC arrival time retained its status as a significant independent factor for decreased liver attenuation in an analysis adjusted for sex, age, race, smoking status, alcohol use, body mass index, diabetes, gastroesophageal reflux disease, and forced expiratory volume in 1 second.

Dr. Jirapatnakul said that the next step will be to determine whether WTC responders with excessive liver fat also have increased liver scarring. In addition, he and his colleagues are working to establish a registry to collect information on the impact of liver disease as it relates to quality of life in members of the WTC GRC.
 

Importance of disease severity

Another direction of future research will be to differentiate between those with hepatic steatosis alone, those with inflammation from hepatic steatosis (steatohepatitis), and those with hepatic fibrosis, which is the most concerning outcome of fatty liver disease, according to Albert Do, MD, clinical director of the fatty liver disease program at Yale University, New Haven, Conn.

“It is the latter group of patients which we are most concerned about, given this is the group at highest risk for harm from liver disease,” added Dr. Do, who wasn’t involved in the research study. “The degree of steatosis is not closely linked with subsequent inflammation nor hepatic fibrosis, and so linkage of disease severity to specific occupational exposures and timing is needed to determine the allocation of support for patients who had suffered harm from fatty liver disease.”

Dr. Do noted that additional research will also need to identify the specific exposure that may be causing hepatic steatosis in early WTC responders. “Currently, only a small number of medications are known to cause this,” he explained, “and thus such knowledge will help us further understand occupational exposures and their associated risks.”

The researchers received study funding from the National Institute for Occupational Safety and Health. They disclosed conflicts of interest with Genentech, AstraZeneca, Pfizer, Bayer Healthcare, Gilead Sciences, and Boehringer Ingelheim. Dr. Do had no conflicts to declare.

FROM THE AMERICAN JOURNAL OF INDUSTRIAL MEDICINE

MR elastography could predict cirrhosis in NAFLD

Progress made on liver disease progression

Liver stiffness measurement with magnetic resonance elastography (MRE) may prove predictive of future cirrhosis risk in patients with nonalcoholic fatty liver disease (NAFLD), according to researchers from the Mayo Clinic in Rochester, Minn.

“These data expand the role of MRE from an accurate diagnostic method to a prognostic noninvasive imaging biomarker that can risk-stratify patients with NAFLD and guide the timing of surveillance and further refine their clinical management,” wrote Tolga Gidener, MD, and colleagues. The study authors added that the research further expands “the role of MRE beyond liver fibrosis estimation by adding a predictive feature to improve individualized disease monitoring and patient counseling.” Their study was published in Clinical Gastroenterology and Hepatology.

Currently, there are no established noninvasive strategies that can effectively identify patients with NAFLD who are at high risk of progression to cirrhosis and liver-related complications. While fibrosis stage on histology may predict liver-associated outcomes in these patients, this approach is invasive, time-consuming, and generally not well tolerated by patients.

Although the technique has been noted for its high success rate and excellent levels of reliability and reproducibility, a possible limitation of MRE is its cost. That said, standalone MRE is reimbursed under Medicare Category I Current Procedural Terminology code 76391 with a cost of $240.02. However, there is also a lack of data on whether baseline liver stiffness measurement by MRE can predict progression of NAFLD to cirrhosis.

To gauge the role of baseline liver stiffness measurement by MRE, Dr. Gidener and colleagues performed a retrospective cohort study that evaluated hard liver–related outcomes in 829 adult patients with NAFLD with or without cirrhosis (median age, 58 years; 54% female) who underwent MRE during 2007-2019.

Patients in the study were followed from the first MRE until death, last clinical encounter, or the end of the study. Clinical outcomes assessed in individual chart review included cirrhosis, hepatic decompensation, and death.

At baseline, the median liver stiffness measurement was 2.8 kPa in 639 patients with NAFLD but without cirrhosis. Over a median 4-year follow-up period, a total of 20 patients developed cirrhosis, with an overall annual incidence rate of 1%.

Baseline liver stiffness measurement by MRE was significantly predictive of subsequent cirrhosis (hazard ratio, 2.93; 95% confidence interval, 1.86-4.62; P < .0001) per 1-kPa difference in liver stiffness measurement at baseline.

According to the researchers, the probability of future cirrhosis development can be ascertained using the current liver stiffness measurement. As such, a greater than 1% probability threshold can be reached in 5 years in patients with a measurement of 2 kPa, 3 years in patients with a measurement of 3 kPa, and 1 year in patients with a measurement of 4-5 kPa. “These time frames inform about estimated time to progression to hard outcomes and provide guidance for subsequent noninvasive monitoring for disease progression,” wrote the researchers.

The baseline liver stiffness measurement by MRE was also significantly predictive of future hepatic decompensation or death (HR, 1.32; 95% CI, 1.13-1.56; P = .0007) per 1-kPa increment in the liver stiffness measurement. Likewise, the 1-year probability of subsequent hepatic decompensation or death in patients with cirrhosis and baseline liver stiffness measurement of 5 kPa versus 8 kPa was 9% versus 20%, respectively. In terms of covariates, age was the only factor that increased the risk of hepatic decompensation or death.
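
Those two probabilities are roughly consistent with the reported hazard ratio under a simple proportional-hazards reading: a 3-kPa difference multiplies the hazard by about 1.32 cubed, or roughly 2.3. The back-of-the-envelope check below assumes a constant hazard over the year; it is an illustration only, not the model the investigators fitted.

```python
import math

# Back-of-the-envelope consistency check, assuming proportional hazards with a
# constant hazard over one year. This illustrates the arithmetic only and is
# not the model fitted in the study.
hr_per_kpa = 1.32
p_1yr_at_5_kpa = 0.09                          # reported 1-year probability at 5 kPa

hazard_5_kpa = -math.log(1 - p_1yr_at_5_kpa)   # implied 1-year cumulative hazard at 5 kPa
hazard_8_kpa = hazard_5_kpa * hr_per_kpa ** (8 - 5)  # scale by the HR for a +3 kPa difference
p_1yr_at_8_kpa = 1 - math.exp(-hazard_8_kpa)

print(f"Implied 1-year probability at 8 kPa: {p_1yr_at_8_kpa:.1%}")  # ~19.5%, near the reported 20%
```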

While the current study offers a glimpse into the potential clinical implications of liver stiffness measurement by MRE in NAFLD, the researchers suggest the applicability of the findings is limited by the study’s small sample size, relatively short follow-up duration, and small number of cirrhosis events.

The researchers received study funding from the National Institute of Diabetes and Digestive and Kidney Diseases, American College of Gastroenterology, National Institutes of Health, and the Department of Defense. The researchers disclosed no other relevant conflicts of interest.

Progress made on liver disease progression

NAFLD is rapidly becoming one of the most common causes of liver disease. While most patients have a benign course, approximately 20% of patients develop nonalcoholic steatohepatitis, the progressive form of the disease. Given the high prevalence (30% of the U.S. population), it is vital to determine which patients are at risk for progression, cirrhosis, and decompensation. Although liver biopsy is the preferred method, the procedure is invasive and carries substantial risks, including severe bleeding. Noninvasive tests that measure liver stiffness have emerged, such as vibration-controlled transient elastography (VCTE; e.g., FibroScan) and magnetic resonance elastography (MRE). Data support the use of liver stiffness as a surrogate measure of fibrosis; MRE has demonstrated higher fidelity and accuracy compared with VCTE but is limited by cost and availability. However, there is a paucity of data regarding the use of liver stiffness to predict progression to cirrhosis or liver-related events.

This study by Dr. Gidener and colleagues highlights the use of MRE-based liver stiffness measurements as a predictor of cirrhosis and decompensation. Baseline measurements of more than 4-5 kPa should alert clinicians to an increased risk of progression to cirrhosis. Patients with cirrhosis and baseline measurements of 8 kPa or higher have a high risk of decompensation or death, suggesting that they should be followed more closely. Given the burgeoning number of patients with NAFLD and NASH, this study demonstrates the importance of identifying high-risk patients in order to optimize use of resources and improve clinical outcomes.
 

Yamini Natarajan, MD, is an investigator at the Center for Innovations in Quality, Effectiveness and Safety at the Michael E. DeBakey VA Medical Center, Houston, and an assistant professor, department of medicine, section of gastroenterology and hepatology, Baylor College of Medicine, Houston. She has no conflicts.

FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY


IBD: COVID-19 vaccination still effective in immunosuppressed


In a real-world setting, full vaccination against SARS-CoV-2 was more than 80% effective at reducing infection in people with inflammatory bowel disease (IBD) who were taking immunosuppressive medications.

The study, which examined postvaccine infection rates in a Veterans Affairs cohort, further validates the benefit of COVID-19 vaccines, particularly in a subgroup most at risk for having compromised immune systems. Furthermore, the findings “may serve to increase patient and provider willingness to pursue vaccination in these settings,” wrote study authors Nabeel Khan, MD, of the Corporal Michael J. Crescenz VA Medical Center, and Nadim Mahmud, MD, of the University of Pennsylvania, both in Philadelphia. The report was published in Gastroenterology. In addition, the researchers said the findings “should provide positive reinforcement to IBD patients taking immunosuppressive agents who may otherwise be reluctant to receive vaccination.”

Since the onset of the COVID-19 pandemic, concerns have been raised regarding the possible heightened risk of SARS-CoV-2 infection among patients with IBD and other diseases associated with immune system dysregulation. Despite these fears, patients with IBD appear to have comparable rates of SARS-CoV-2 infection to that of the general population.

Pfizer’s BNT162b2 and Moderna’s mRNA-1273 vaccines are the most widely used COVID-19 vaccines in the United States. These vaccines have demonstrated over 90% efficacy for preventing infection and severe disease in late-stage trials; however, few trials have examined their pooled effectiveness in immunocompromised patients and those taking immunosuppressive therapies.

To address this gap, researchers conducted a retrospective cohort study that included 14,697 patients (median age, 68 years) from the Veterans Health Administration database who had been diagnosed with IBD before the start date of the administration’s vaccination program. A total of 7,321 patients in the cohort had received at least 1 dose of either the Pfizer (45.2%) or Moderna (54.8%) vaccines.

Approximately 61.8% of patients had ulcerative colitis, while the remaining patients had Crohn’s disease. In terms of medications, vaccinated versus unvaccinated patients in the study were exposed to mesalamine alone (54.9% vs. 54.6%), thiopurines (10.8% vs. 10.5%), anti–tumor necrosis factor (anti-TNF) biologic monotherapy (18.8% vs. 20.9%), vedolizumab (7.2% vs. 6.0%), ustekinumab (1.0% vs. 1.1%), tofacitinib (0.7% vs. 0.8%), methotrexate (2.3% vs. 2.0%), and/or corticosteroids (6.8% vs. 5.6%).

A total of 3,561 patients who received the Moderna vaccine and 3,017 patients who received the Pfizer vaccine received both doses. The median time between each dose was 21 days for Pfizer and 28 days for Moderna.

Patients who were unvaccinated had significantly fewer comorbidities (P < .001). The majority of patients in the overall cohort were men (92.2%), a group identified as having a much greater risk of worse COVID-19–related outcomes.

Unvaccinated patients in the study had a higher rate of SARS-CoV-2 infection compared with the fully vaccinated group (1.34% vs. 0.11%, respectively) in follow-up data reported through April 20, 2021. Over a median follow-up duration of 20 days, researchers found 14 infections with SARS-CoV-2 (0.28%) in partially vaccinated individuals. Seven infections (0.11%) were reported in fully vaccinated individuals over a median 38-day follow-up period.

Compared with unvaccinated patients, full vaccination status was associated with a 69% reduction in the hazard of infection (HR, 0.31; 95% confidence interval, 0.17-0.56; P < .001). The corresponding vaccine efficacy rates were 25.1% for partial vaccination and 80.4% for full vaccination.
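
As a point of reference for how such figures relate, vaccine efficacy is conventionally expressed as 1 minus the hazard (or rate) ratio. The sketch below shows that arithmetic using the adjusted HR reported above; it is illustrative only, and the study’s 25.1% and 80.4% efficacy estimates come from the authors’ own models rather than from this calculation.

```python
# Minimal sketch of the standard relationship between a hazard ratio and
# vaccine efficacy: VE = (1 - HR) x 100%. Shown with the adjusted HR reported
# above; the study's 25.1% and 80.4% efficacy figures come from its own
# models, not from this arithmetic.

def vaccine_efficacy_pct(hazard_ratio: float) -> float:
    """Vaccine efficacy (%) implied by a hazard (or rate) ratio."""
    return (1.0 - hazard_ratio) * 100.0


hr_full_vs_unvaccinated = 0.31  # adjusted HR (95% CI, 0.17-0.56)
print(f"{vaccine_efficacy_pct(hr_full_vs_unvaccinated):.0f}%")  # 69%
```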

There were no significant interactions between vaccination status and exposure to steroids (P = .64), mesalamine versus immunosuppressive agents (P = .46), or anti-TNFs with immunomodulators or steroids versus other therapies (P = .34). In addition, no difference was found in the association between vaccination status and infection for patients who received the Moderna versus the Pfizer vaccines (P = .09).

Unvaccinated individuals had the highest raw proportions of severe infection with the novel coronavirus (0.32%) and all-cause mortality (0.66%), compared with people who were partially or fully vaccinated. In adjusted Cox regression analyses, there was no significant association between vaccination status and severe SARS-CoV-2 infection (fully vaccinated vs. unvaccinated, P = .18) or all-cause mortality (fully vaccinated vs. unvaccinated, P = .11). The researchers wrote that “future studies with larger sample size and/or longer follow-up are needed to evaluate this further.”

An important limitation of this study was the inclusion of mostly older men who were also predominantly White (80.4%). Ultimately, this population may limit the generalizability of the findings for women and patients of other races/ethnicities.

The study received no financial support. Dr. Khan has received research grants from several pharmaceutical companies; Dr. Mahmud disclosed no conflicts.

The results are reassuring

There is a need for evidence to clarify the effectiveness of SARS-CoV-2 vaccination in select subpopulations like inflammatory bowel disease (IBD) that were underrepresented in the vaccine clinical trials. Patients on select immune modifying therapies have historically had suboptimal immunologic responses to vaccines in the pre-COVID era, and early data from national and international IBD registries suggest that, while patients generally do mount humoral responses to SARS-CoV-2 vaccination, absolute postvaccination antibody titers may be blunted by specific drug mechanisms such as anti–tumor necrosis factor–alpha therapies or corticosteroids. These reports, however, do not tell the whole story. Postvaccination humoral and cellular (T-cell) immunity appear to be independently mediated, and the thresholds correlating antibody titers with rates of COVID-19 infection or prevention of serious complications have yet to be determined.

Therefore, this study by Mahmud and Khan looking at rates of COVID-19 infection in a large Veterans Affairs cohort of patients with IBD on a variety of immune modifying therapies after SARS-CoV-2 vaccination with an mRNA vaccine is highly clinically relevant, and the findings are very reassuring. Patients who received both vaccine doses had significantly lower rates of COVID-19 infection, with overall vaccine efficacy rates similar to those seen in the general population. Although antibody levels and cellular immunity correlations with protection against infection are still unknown, and the degree of prevention against severe disease has not yet been clarified with larger numbers over time, practitioners can confidently tell their patients with IBD that vaccination has a very high likelihood of protecting them from COVID-19 infection.

Gil Y. Melmed, MD, MS, is a professor of medicine at Cedars-Sinai, Los Angeles. He reports being a consultant to AbbVie, Arena, Boehringer Ingelheim, Bristol-Myers Squibb/Celgene, Janssen, Pfizer, Samsung Bioepis, Shionogi, and Takeda. He is principal investigator of CORALE-IBD, a registry evaluating postvaccine outcomes in IBD after SARS-CoV-2 vaccination.


FROM GASTROENTEROLOGY


How diet affects NASH-to-HCC progression


A new study sought to establish a clinically relevant mouse model of nonalcoholic steatohepatitis (NASH) that closely reflects human disease as well as the multitissue dynamics involved in the progression and regression of the condition, according to the researchers. The study focused on the association between NASH progression and consumption of a Western diet, as well as the development of hepatocellular carcinoma (HCC).

The study used a model consisting of hyperphagic mice that lacked a functional ALMS1 gene (Foz/Foz), in addition to wild-type littermates. The model ultimately defined “the key signaling and cytokine pathways that are critical for disease development and resolution” associated with NASH, wrote Souradipta Ganguly, PhD, of the University of California, San Diego, and colleagues. The report was published in Cellular and Molecular Gastroenterology and Hepatology.

According to the researchers, this study is unique given that “current rodent models of NASH do not reproduce the complete spectrum of metabolic and histologic” nonalcoholic fatty liver disease (NAFLD) phenotypes. Likewise, the lack of “systemic studies in a single rodent model of NASH that closely recapitulates the human pathology” reinforces the importance of the new model, the researchers added.

Over time, NASH can progress to cirrhosis and HCC. Studies that fed wild-type mice a Western diet have largely failed to mimic the full progression from NASH to fibrosis to HCC. In addition, the models in these studies fail to reflect the multitissue injuries frequently observed in NASH.

To circumvent these challenges, Dr. Ganguly and colleagues used ALMS1-mutated mice to develop a rodent model of metabolic syndrome that included NASH with fibrosis, chronic kidney disease, and cardiovascular disease. The ALMS1 mutation also resulted in the mice becoming hyperphagic, which increases hunger and leads to early-onset obesity, among other conditions characteristic of metabolic syndrome.

Researchers fed the hyperphagic Foz/Foz mice and wild-type littermates a Western diet or standard diet during a 12-week period for NASH/fibrosis and a 24-week period for HCC. After NASH was established, mice were switched back to normal chow to see if the condition regressed.

Macronutrient distribution of the study’s Western diet included 40% fat, 15% protein, and 44% carbohydrates, based on total caloric content. In contrast, the standard chow included 12% fat, 23% protein, and 65% carbohydrates from total calories.
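
To illustrate how calorie-based composition differs from composition by weight, the sketch below converts those calorie shares into approximate percentages by weight using generic Atwater factors (9 kcal/g for fat, 4 kcal/g for protein and carbohydrate). The factors and the conversion are standard nutrition assumptions used only for illustration, not figures taken from the study.

```python
# Rough illustration: converting a diet's percent-of-calories composition into
# approximate percent-by-weight using generic Atwater factors (assumed here:
# 9 kcal/g fat, 4 kcal/g protein, 4 kcal/g carbohydrate). Not data from the study.

ATWATER_KCAL_PER_G = {"fat": 9.0, "protein": 4.0, "carbohydrate": 4.0}

def percent_by_weight(percent_of_calories: dict) -> dict:
    """Convert % of calories per macronutrient to approximate % by weight."""
    grams = {k: v / ATWATER_KCAL_PER_G[k] for k, v in percent_of_calories.items()}
    total = sum(grams.values())
    return {k: round(100.0 * g / total, 1) for k, g in grams.items()}

# Percent of total calories as reported for each diet in the study.
western_diet = {"fat": 40, "protein": 15, "carbohydrate": 44}
standard_chow = {"fat": 12, "protein": 23, "carbohydrate": 65}

print(percent_by_weight(western_diet))   # fat ~23%, protein ~20%, carbohydrate ~57%
print(percent_by_weight(standard_chow))  # fat ~6%, protein ~25%, carbohydrate ~70%
```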

Within 1-2 weeks, Foz/Foz mice fed the Western diet were considered steatotic. These mice subsequently developed NASH by 4 weeks of the study and grade 3 fibrosis by 12 weeks. The researchers concurrently observed the development of chronic kidney injury in the animals. Mice continuing to 24 weeks ultimately progressed to cirrhosis and HCC; these mice demonstrated reduced survival due to cardiac dysfunction.

Mice that developed NASH were then switched to a diet consisting of normal chow. Following this switch, NASH began to regress, and survival improved. These mice did not appear to develop HCC, and total liver weight was significantly reduced compared with the mice that didn’t enter the regression phase of the study. The researchers wrote that the resolution of hepatic steatosis was also consistent with improved glucose tolerance.

In transcriptomic and histologic analyses, the researchers found strong concordance between the NASH livers of Foz/Foz mice and human NASH.

The study also found that early disruption of the gut barrier, microbial dysbiosis, lipopolysaccharide leakage, and intestinal inflammation preceded NASH in the Foz/Foz mice fed the Western diet, resulting in acute-phase liver inflammation. The early inflammation was reflected by an increase in several chemokines and cytokines by 1-2 weeks. As NASH progressed, the liver cytokine/chemokine profile continued to evolve, leading to a predominance of monocyte recruitment. “Further studies will elaborate the roles of these NASH-specific microbiomial features in the development and progression of NASH fibrosis,” wrote the researchers.

The study received financial support from Janssen, in addition to funding from an ALF Liver Scholar award, ACTRI/National Institutes of Health, the SDDRC, and the NIAAA/National Institutes of Health. The authors disclosed no conflicts.

A clinically relevant model emerges

The prevalence and incidence of nonalcoholic steatohepatitis and NASH-induced hepatocellular carcinoma (HCC) have rapidly increased worldwide in recent years. The growing number of patients with NASH and NASH-HCC poses a significant public health burden, further confounded by suboptimal approaches for disease management, including a lack of effective pharmacotherapy. To accelerate the development of novel treatment modalities, preclinical studies using animal models highly relevant to human disease are of utmost importance. The ideal experimental NASH model recapitulates the multifaceted human condition, including the etiology, underlying pathogenetic mechanisms, histologic features, and progression from NASH to NASH-related HCC.

The study by Ganguly and colleagues demonstrates that, when hyperphagic Foz/Foz mice are provided with a Western diet as desired, they consume excess calories, leading to obesity, insulin resistance, kidney injury, cardiovascular disease, and NASH. Notably, Foz/Foz mice develop NASH with a more severe phenotype and about twice as fast as wild-type mice. When continuing the Western diet for 6 months, Foz/Foz mice develop NASH-related HCC. In this experimental setting, NASH onset and progression to HCC are markedly accelerated, compared with other common models of NASH-induced carcinogenesis, which require significantly longer time or diets and manipulations that are less relevant to human disease etiology and pathophysiology. Thus, Western diet–fed Foz/Foz mice represent a unique, convenient, and clinically relevant approach to model NASH and NASH-to-HCC progression. Future in-depth molecular characterization of this murine NASH-HCC should reveal the transcriptomic and mutational landscape of the tumors and contrast these features to human NASH-HCC, further underscoring the clinical utility of this preclinical model.

Petra Hirsova, PharmD, PhD, is an assistant professor and investigator in the division of gastroenterology and hepatology at the Mayo Clinic, Rochester, Minn. Dr. Hirsova reported having no disclosures.


FROM CELLULAR AND MOLECULAR GASTROENTEROLOGY AND HEPATOLOGY


IBD: COVID-19 vaccination still effective in immunosuppressed

Article Type
Changed
Mon, 08/02/2021 - 15:18

In a real-world setting, full vaccination against SARS-CoV-2 was more than 80% effective at reducing infection in people with inflammatory bowel disease (IBD) who were taking immunosuppressive medications.

The study, which examined postvaccine infection rates in a Veterans Affairs cohort, further validates the benefit of COVID-19 vaccines, particularly in a subgroup most at risk for having compromised immune systems. Furthermore, the findings “may serve to increase patient and provider willingness to pursue vaccination in these settings,” wrote study authors Nabeel Khan, MD, of the Corporal Michael J. Crescenz VA Medical Center and Nadim Mahmud, MD, of the University of Pennsylvania, both in Philadelphia. The report was published in Gastroenterology. In addition, the researchers said the findings “should provide positive reinforcement to IBD patients taking immunosuppressive agents who may otherwise be reluctant to receive vaccination.”

Since the onset of the COVID-19 pandemic, concerns have been raised regarding the possible heightened risk of SARS-CoV-2 infection among patients with IBD and other diseases associated with immune system dysregulation. Despite these fears, patients with IBD appear to have comparable rates of SARS-CoV-2 infection to that of the general population.

Pfizer’s BNT162b2 and Moderna’s RNA-1273 vaccines are the most widely used COVID-19 vaccines in the United States. These vaccines have demonstrated over 90% efficacy for preventing infection and severe disease in late-stage trials; however, few trials have examined their pooled effectiveness in immunocompromised patients and those taking immunosuppressive therapies.

To address this gap, researchers conducted a retrospective cohort study that included 14,697 patients (median age, 68 years) from the Veterans Health Administration database who had been diagnosed with IBD before the start date of the administration’s vaccination program. A total of 7,321 patients in the cohort had received at least 1 dose of either the Pfizer (45.2%) or Moderna (54.8%) vaccines.

Approximately 61.8% of patients had ulcerative colitis, while the remaining patients had Crohn’s disease. In terms of medications, vaccinated versus unvaccinated patients in the study were exposed to mesalamine alone (54.9% vs. 54.6%), thiopurines (10.8% vs. 10.5%), anti–tumor necrosis factor (anti-TNF) biologic monotherapy (18.8% vs. 20.9%), vedolizumab (7.2% vs. 6.0%), ustekinumab (1.0% vs. 1.1%), tofacitinib (0.7% vs. 0.8%), methotrexate (2.3% vs. 2.0%%), and/or corticosteroids (6.8% vs. 5.6%).

A total of 3,561 patients who received the Moderna vaccine and 3,017 patients who received the Pfizer vaccine received both doses. The median time between each dose was 21 days for Pfizer and 28 days for Moderna.

Patients who were unvaccinated had significantly fewer comorbidities (P < .001). The majority of patients in the overall cohort were men (92.2%), a group identified as having a much greater risk of worse COVID-19–related outcomes.

Unvaccinated patients in the study had a higher rate of SARS-CoV-2 infection compared with the fully vaccinated group (1.34% vs. 0.11%, respectively) in follow-up data reported through April 20, 2021. Over a median follow-up duration of 20 days, researchers found 14 infections with SARS-CoV-2 (0.28%) in partially vaccinated individuals. Seven infections (0.11%) were reported in fully vaccinated individuals over a median 38-day follow-up period.

Compared with unvaccinated patients, full vaccination status was associated with a 69% reduction in the hazard ratio of infection (HR, 0.31; 95% confidence interval, 0.17-0.56; P < .001). Corresponding vaccine efficacy rates were 25.1% for partial vaccination and 80.4% for full vaccination.

There were no significant interactions between vaccination status and exposure to steroids (P =.64), mesalamine versus immunosuppressive agents (P =.46), or anti-TNFs with immunomodulators or steroids versus other therapies (P =.34). In addition, no difference was found in the association between vaccination status and infection for patients who received the Moderna versus the Pfizer vaccines (P =.09).

Unvaccinated individuals had the highest raw proportions of severe infection with the novel coronavirus (0.32%) and all-cause mortality (0.66%), compared with people who were partially vaccinated or fully vaccinated. In adjusted Cox regression analyses, there was no significant association between vaccination status and severe SARS-CoV-2 infection (fully vaccinated vs. unvaccinated, P = .18) or all-cause mortality (fully vaccinated vs. unvaccinated, P =.11). The researchers wrote that, “future studies with larger sample size and/or longer follow-up are needed to evaluate this further.”

An important limitation of this study was the inclusion of mostly older men who were also predominantly White (80.4%). Ultimately, this population may limit the generalizability of the findings for women and patients of other races/ethnicities.

While the study received no financial support, Dr. Khan has received research grants from several pharmaceutical companies, but Dr. Mahmud disclosed no conflicts.

Publications
Topics
Sections

In a real-world setting, full vaccination against SARS-CoV-2 was more than 80% effective at reducing infection in people with inflammatory bowel disease (IBD) who were taking immunosuppressive medications.

The study, which examined postvaccine infection rates in a Veterans Affairs cohort, further validates the benefit of COVID-19 vaccines, particularly in a subgroup most at risk for having compromised immune systems. Furthermore, the findings “may serve to increase patient and provider willingness to pursue vaccination in these settings,” wrote study authors Nabeel Khan, MD, of the Corporal Michael J. Crescenz VA Medical Center and Nadim Mahmud, MD, of the University of Pennsylvania, both in Philadelphia. The report was published in Gastroenterology. In addition, the researchers said the findings “should provide positive reinforcement to IBD patients taking immunosuppressive agents who may otherwise be reluctant to receive vaccination.”

Since the onset of the COVID-19 pandemic, concerns have been raised regarding the possible heightened risk of SARS-CoV-2 infection among patients with IBD and other diseases associated with immune system dysregulation. Despite these fears, patients with IBD appear to have comparable rates of SARS-CoV-2 infection to that of the general population.

Pfizer’s BNT162b2 and Moderna’s RNA-1273 vaccines are the most widely used COVID-19 vaccines in the United States. These vaccines have demonstrated over 90% efficacy for preventing infection and severe disease in late-stage trials; however, few trials have examined their pooled effectiveness in immunocompromised patients and those taking immunosuppressive therapies.

To address this gap, researchers conducted a retrospective cohort study that included 14,697 patients (median age, 68 years) from the Veterans Health Administration database who had been diagnosed with IBD before the start date of the administration’s vaccination program. A total of 7,321 patients in the cohort had received at least 1 dose of either the Pfizer (45.2%) or Moderna (54.8%) vaccines.

Approximately 61.8% of patients had ulcerative colitis, while the remaining patients had Crohn’s disease. In terms of medications, vaccinated versus unvaccinated patients in the study were exposed to mesalamine alone (54.9% vs. 54.6%), thiopurines (10.8% vs. 10.5%), anti–tumor necrosis factor (anti-TNF) biologic monotherapy (18.8% vs. 20.9%), vedolizumab (7.2% vs. 6.0%), ustekinumab (1.0% vs. 1.1%), tofacitinib (0.7% vs. 0.8%), methotrexate (2.3% vs. 2.0%%), and/or corticosteroids (6.8% vs. 5.6%).

A total of 3,561 patients who received the Moderna vaccine and 3,017 patients who received the Pfizer vaccine received both doses. The median time between each dose was 21 days for Pfizer and 28 days for Moderna.

Patients who were unvaccinated had significantly fewer comorbidities (P < .001). The majority of patients in the overall cohort were men (92.2%), a group identified as having a much greater risk of worse COVID-19–related outcomes.

Unvaccinated patients in the study had a higher rate of SARS-CoV-2 infection compared with the fully vaccinated group (1.34% vs. 0.11%, respectively) in follow-up data reported through April 20, 2021. Over a median follow-up duration of 20 days, researchers found 14 infections with SARS-CoV-2 (0.28%) in partially vaccinated individuals. Seven infections (0.11%) were reported in fully vaccinated individuals over a median 38-day follow-up period.

Compared with unvaccinated patients, full vaccination status was associated with a 69% reduction in the hazard ratio of infection (HR, 0.31; 95% confidence interval, 0.17-0.56; P < .001). Corresponding vaccine efficacy rates were 25.1% for partial vaccination and 80.4% for full vaccination.

There were no significant interactions between vaccination status and exposure to steroids (P =.64), mesalamine versus immunosuppressive agents (P =.46), or anti-TNFs with immunomodulators or steroids versus other therapies (P =.34). In addition, no difference was found in the association between vaccination status and infection for patients who received the Moderna versus the Pfizer vaccines (P =.09).

Unvaccinated individuals had the highest raw proportions of severe infection with the novel coronavirus (0.32%) and all-cause mortality (0.66%), compared with people who were partially vaccinated or fully vaccinated. In adjusted Cox regression analyses, there was no significant association between vaccination status and severe SARS-CoV-2 infection (fully vaccinated vs. unvaccinated, P = .18) or all-cause mortality (fully vaccinated vs. unvaccinated, P =.11). The researchers wrote that, “future studies with larger sample size and/or longer follow-up are needed to evaluate this further.”

An important limitation of this study was the inclusion of mostly older men who were also predominantly White (80.4%). Ultimately, this population may limit the generalizability of the findings for women and patients of other races/ethnicities.

While the study received no financial support, Dr. Khan has received research grants from several pharmaceutical companies, but Dr. Mahmud disclosed no conflicts.


FROM GASTROENTEROLOGY

How diet affects NASH-to-HCC progression

Article Type
Changed
Mon, 08/02/2021 - 11:20

A new study sought to establish a clinically relevant mouse model of nonalcoholic steatohepatitis (NASH) that closely reflects human disease as well as the multitissue dynamics involved in the progression and regression of the condition, according to the researchers. The study focused on the association between consumption of a Western diet and progression of NASH, as well as the development of hepatocellular carcinoma (HCC).

The study used a model consisting of hyperphagic mice that lacked a functional ALMS1 gene (Foz/Foz), in addition to wild-type littermates. The model ultimately defined “the key signaling and cytokine pathways that are critical for disease development and resolution” associated with NASH, wrote Souradipta Ganguly, PhD, of the University of California, San Diego, and colleagues. The report was published in Cellular and Molecular Gastroenterology and Hepatology.

According to the researchers, this study is unique given “current rodent models of NASH do not reproduce the complete spectrum of metabolic and histologic” nonalcoholic fatty liver disease (NAFLD) phenotypes. Likewise, the lack of “systemic studies in a single rodent model of NASH that closely recapitulates the human pathology” reinforces the importance of the new model, the researchers added.

Over time, NASH can progress to cirrhosis and HCC. Studies that fed wild-type mice a Western diet have largely failed to reproduce the full progression from NASH to fibrosis to HCC. In addition, the models in these studies fail to reflect the multitissue injuries frequently observed in NASH.

To circumvent these challenges, Dr. Ganguly and colleagues used ALMS1-mutated mice to develop a rodent model of metabolic syndrome that included NASH with fibrosis, chronic kidney disease, and cardiovascular disease. The ALMS1 mutation also resulted in the mice becoming hyperphagic, which increases hunger and leads to early-onset obesity, among other conditions characteristic of metabolic syndrome.

Researchers fed the hyperphagic Foz/Foz mice and wild-type littermates a Western diet or standard diet during a 12-week period for NASH/fibrosis and a 24-week period for HCC. After NASH was established, mice were switched back to normal chow to see if the condition regressed.

Macronutrient distribution of the study’s Western diet included 40% fat, 15% protein, and 44% carbohydrates, based on total caloric content. In contrast, the standard chow included 12% fat, 23% protein, and 65% carbohydrates from total calories.
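To make the two feeding regimens easier to compare at a glance, here is a minimal sketch; the percentages are those reported above, and holding them in a small dictionary with a totals check is my own illustration (the roughly 1% shortfall for the Western diet simply reflects rounding in the reported figures).

    # Macronutrient split of the two diets, as % of total calories.
    diets = {
        "Western diet":  {"fat": 40, "protein": 15, "carbohydrate": 44},
        "standard chow": {"fat": 12, "protein": 23, "carbohydrate": 65},
    }

    for name, macros in diets.items():
        print(f"{name}: {macros} (sums to {sum(macros.values())}%)")

    # The Western diet draws roughly 3x the share of its calories from fat.
    print(f"fat share ratio: {40 / 12:.1f}x")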

Within 1-2 weeks, Foz/Foz mice fed the Western diet had become steatotic. These mice subsequently developed NASH by week 4 and grade 3 fibrosis by week 12, and the researchers concurrently observed the development of chronic kidney injury. Mice that continued on the Western diet to 24 weeks ultimately progressed to cirrhosis and HCC and demonstrated reduced survival due to cardiac dysfunction.

Mice that developed NASH were then switched to a diet consisting of normal chow. Following this switch, NASH began to regress, and survival improved. These mice did not appear to develop HCC, and total liver weight was significantly reduced compared with the mice that didn’t enter the regression phase of the study. The researchers wrote that the resolution of hepatic steatosis was also consistent with improved glucose tolerance.

In transcriptomic and histologic analyses, the researchers found strong concordance between NASH livers from Foz/Foz mice and human NASH.

The study also found that early disruption of the gut barrier, microbial dysbiosis, lipopolysaccharide leakage, and intestinal inflammation preceded NASH in the Foz/Foz mice fed the Western diet, resulting in acute-phase liver inflammation. This early inflammation was reflected by an increase in several chemokines and cytokines by 1-2 weeks. As NASH progressed, the liver cytokine/chemokine profile continued to evolve, with monocyte recruitment becoming predominant. “Further studies will elaborate the roles of these NASH-specific microbiomial features in the development and progression of NASH fibrosis,” wrote the researchers.

The study received financial support from Janssen, in addition to funding from an ALF Liver Scholar award, ACTRI/National Institutes of Health, the SDDRC, and the NIAAA/National Institutes of Health. The authors disclosed no conflicts.


FROM CELLULAR AND MOLECULAR GASTROENTEROLOGY AND HEPATOLOGY

MR elastography could predict cirrhosis in NAFLD

Article Type
Changed
Tue, 08/03/2021 - 09:17

Liver stiffness measurement with magnetic resonance elastography (MRE) may prove predictive of future cirrhosis risk in patients with nonalcoholic fatty liver disease (NAFLD), according to researchers from the Mayo Clinic in Rochester, Minn.

“These data expand the role of MRE from an accurate diagnostic method to a prognostic noninvasive imaging biomarker that can risk-stratify patients with NAFLD and guide the timing of surveillance and further refine their clinical management,” wrote Tolga Gidener, MD, and colleagues. The study authors added that the research further expands “the role of MRE beyond liver fibrosis estimation by adding a predictive feature to improve individualized disease monitoring and patient counseling.” Their study was published in Clinical Gastroenterology and Hepatology.

Currently, there are no established noninvasive strategies that can effectively identify patients with NAFLD who are at high risk of progression to cirrhosis and liver-related complications. While fibrosis stage on histology may predict liver-associated outcomes in these patients, this approach is invasive, time consuming, and generally not well tolerated by patients.

Although MRE has been noted for its high success rate and excellent reliability and reproducibility, a possible limitation is its cost; that said, standalone MRE is reimbursed under Medicare Category I Current Procedural Terminology code 76391 at a cost of $240.02. There has also been a lack of data on whether baseline liver stiffness measurement by MRE can predict progression of NAFLD to cirrhosis.

To gauge the role of baseline liver stiffness measurement by MRE, Dr. Gidener and colleagues performed a retrospective cohort study that evaluated hard liver–related outcomes in 829 adult patients with NAFLD with or without cirrhosis (median age, 58 years; 54% female) who underwent MRE during 2007-2019.

Patients in the study were followed from the first MRE until death, last clinical encounter, or the end of the study. Clinical outcomes assessed in individual chart review included cirrhosis, hepatic decompensation, and death.

At baseline, the median liver stiffness measurement was 2.8 kPa in 639 patients with NAFLD but without cirrhosis. Over a median 4-year follow-up period, a total of 20 patients developed cirrhosis, with an overall annual incidence rate of 1%.

Baseline liver stiffness measurement by MRE was significantly predictive of subsequent cirrhosis, with a hazard ratio of 2.93 (95% confidence interval, 1.86-4.62; P < .0001) per 1-kPa increment in baseline liver stiffness.
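Because the hazard ratio is reported per 1-kPa increment, it compounds multiplicatively across larger differences in liver stiffness under the usual Cox model interpretation (an assumption about the model form on my part, not something spelled out in the article). A minimal sketch:

    # Illustrative only: assumes the per-kPa HR of 2.93 applies multiplicatively,
    # as for a continuous covariate in a standard Cox model.
    hr_per_kpa = 2.93

    def relative_hazard(delta_kpa: float) -> float:
        """Hazard of cirrhosis relative to a patient whose baseline stiffness is delta_kpa lower."""
        return hr_per_kpa ** delta_kpa

    for delta in (1, 2, 3):
        print(f"+{delta} kPa at baseline -> ~{relative_hazard(delta):.1f}x the hazard")
    # +1 kPa -> ~2.9x, +2 kPa -> ~8.6x, +3 kPa -> ~25.2x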

According to the researchers, the probability of future cirrhosis can be estimated from a patient’s current liver stiffness measurement. A greater than 1% probability of cirrhosis is reached within 5 years for a measurement of 2 kPa, within 3 years for 3 kPa, and within 1 year for 4-5 kPa. “These time frames inform about estimated time to progression to hard outcomes and provide guidance for subsequent noninvasive monitoring for disease progression,” wrote the researchers.

The baseline liver stiffness measurement by MRE was also significantly predictive of future hepatic decompensation or death (HR, 1.32; 95% CI, 1.13-1.56; P = .0007) per 1-kPa increment in the liver stiffness measurement. Likewise, the 1-year probability of subsequent hepatic decompensation or death in patients with cirrhosis and baseline liver stiffness measurement of 5 kPa versus 8 kPa was 9% versus 20%, respectively. In terms of covariates, age was the only factor that increased the risk of hepatic decompensation or death.
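As a quick consistency check (my own arithmetic, using the approximation that small 1-year probabilities scale roughly in proportion to the hazard), the quoted 9% versus 20% figures line up reasonably well with the per-kPa hazard ratio:

    # Back-of-envelope: HR 1.32 per kPa across the 3-kPa gap between 5 and 8 kPa.
    hr_per_kpa = 1.32
    implied_ratio = hr_per_kpa ** 3        # ~2.30

    p_5kpa, p_8kpa = 0.09, 0.20            # reported 1-year probabilities
    observed_ratio = p_8kpa / p_5kpa       # ~2.22

    print(f"Implied hazard ratio over 3 kPa: {implied_ratio:.2f}")
    print(f"Ratio of reported 1-year probabilities: {observed_ratio:.2f}")
    # At probabilities this small the two ratios should be close, and they are.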

While the current study offers a glimpse into the potential clinical implications of liver stiffness measurement by MRE in NAFLD, the researchers note that the applicability of the findings is limited by the study’s small sample size, relatively short follow-up duration, and small number of cirrhosis events.

The researchers received study funding from the National Institute of Diabetes and Digestive and Kidney Diseases, American College of Gastroenterology, National Institutes of Health, and the Department of Defense. The researchers disclosed no other relevant conflicts of interest.


FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY

PPIs could be bad news for oral cancer therapies

Article Type
Changed
Thu, 07/22/2021 - 14:24

 

A substantial proportion of patients with cancer use proton pump inhibitors (PPIs), and up to one-third of these patients are also using oral cancer treatments that could be adversely affected by concomitant PPI use, according to a cross-sectional analysis.

Amit Patel, MD, a gastroenterologist with Duke University, Durham, N.C., was not involved in the study but commented on it in an interview. The “sobering” study findings highlight the need for “clinicians to carefully and regularly assess the indications and need for PPI, which are often overutilized, and consider ‘deprescribing’ based on clinical guidance,” he explained.

Previous research indicates the use of PPIs can lower the bioavailability and efficacy of oral cancer treatments, such as tyrosine kinase inhibitors (TKIs) and checkpoint inhibitors. In the current study, published in JAMA Network Open, researchers sought to identify how many patients with cancer were taking treatments at risk for altered efficacy from PPI use and what factors were associated with use of PPIs.
 

The study findings

Jean-Luc Raoul, MD, and colleagues analyzed physician-reported medical data for 872 patients with cancer (566 women and 306 men; median age, 63 years) treated at four comprehensive cancer centers in France. A total of 229 patients in the study (26.3%) were taking PPIs.

Most patients (71.1%) were using PPIs on a regular basis; reasons included epigastric pain (50.0%), retrosternal pain (14.0%), proven esophageal or gastric ulcer (8.0%), and gastroprotection (15.0%).

Factors associated with PPI use in this cohort included older age (odds ratio, 1.02; P < .001), Eastern Cooperative Oncology Group performance status (PS) (PS 1: OR, 1.92; PS 2: OR, 2.51; PS 3: OR, 2.33; P < .001), receipt of hormone therapy (OR, 0.59; P = .01), metastatic stage (P = .03), and tumor site (P = .045).
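The age odds ratio of 1.02 is per year of age, so its practical magnitude only shows up over multi-year gaps; compounding it (a standard way to read per-unit odds ratios, shown here as my own illustration rather than an analysis from the paper) gives:

    # Illustrative: per-year OR of 1.02 compounded over larger age differences.
    or_per_year = 1.02
    for years in (10, 20, 30):
        print(f"{years}-year age gap -> odds ratio ~{or_per_year ** years:.2f}")
    # 10 years -> ~1.22, 20 years -> ~1.49, 30 years -> ~1.81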

Older age and PS are particularly important characteristics, explained Dr. Patel. “Unfortunately, older patients with cancer and/or poor PS are more likely to have medical interactions that may result in their being prescribed PPI medications, often for indications that may not justify their use, and/or for indefinite durations.”

He noted that clinicians who are considering prescribing PPI medications should carefully address the indications for PPIs in the clinical scenario, the evidence supporting PPI use for the indication, ratio of benefits and risks, and potential alternatives to PPI use to mitigate potential issues with other therapies.

Approximately 29% of the patients taking PPIs were also taking drugs whose efficacy might be affected by PPI use, including capecitabine (n = 5), sunitinib (n = 5), cabozantinib (n = 2), pazopanib (n = 1), gefitinib (n = 1), erlotinib (n = 1), and sorafenib (n = 1). Another 39 out of 90 patients (25.6%) taking PPIs were also receiving checkpoint inhibitors. Of the 20 patients who took TKIs and PPIs, a total of 16 reported long-term PPI use. The most common reason for long-term use of PPIs was epigastric pain (n = 11).

Since this study was based on physician-reported data, the analysis was limited by the lack of data for all patients seen by each participating physician. Despite this limitation, the investigators reported no major sources of bias and suggested that the study’s prospective nature and relatively large cohort strengthened the analysis.

PPI use and cancer care

Although issues exist with PPIs in respect to cancer therapies, there are some strategies which may help reduce possible negative effects, Dr. Patel said. “When PPI medications are prescribed, they should be used at the lowest effective dose for the shortest necessary duration, and their use should be regularly reevaluated for dose reduction and/or potential discontinuation.”

Dr. Patel noted that, based on the indication for PPIs, alternatives to PPIs should be considered in the setting of potential drug-drug interactions that may affect the efficacy of oral cancer therapies. “For example, for intermittent typical reflux symptoms such as heartburn, over-the-counter antacids may be considered, along with reflux lifestyle modifications,” he explained.

Likewise, the study authors stated in their research letter that “PPIs should be actively identified and substituted” in certain cases. The authors added that antacids are also the best option for patients taking checkpoint inhibitors.

“For those patients who absolutely must take TKI and PPI, clinicians can also consider staggering the dosing schedule, such as taking the TKI in the morning at least 2 hours before PPI and/or with an acidic beverage,” added Dr. Patel.

Although the findings from this study raise potential concerns, Dr. Patel said further clinical investigations are needed to help the medical community better understand the specific effects of PPIs on the efficacy of various chemotherapeutic agents and to help develop better management options for patients in these settings.

The authors reported relationships with Bayer, Merck, Transgene, and others. Dr. Patel has no relevant conflicts of interest to report.


FROM JAMA NETWORK OPEN

New allergy guidelines call for end to food bans in schools

Article Type
Changed
Tue, 05/25/2021 - 16:41

 


 

Children with food allergies often require diligent monitoring and a restricted diet to reduce the risk of allergic reactions, but there is little evidence to support so-called "food bans" at schools and childcare centers.

Instead, new practice guidelines published earlier this month in the Journal of Allergy and Clinical Immunology call for better allergy-management training for school staff, as well as greater epinephrine availability in educational settings. The guidelines were developed by an international panel of clinicians, school personnel, and parents.
 

The guidance at a glance

Rather than imposing site-wide prohibitions on nuts, dairy, and other allergenic foods, the practice guidance recommends that schools and childcare centers use "common-sense approaches" to reduce the risk of allergic reactions among school-aged children. According to the guideline authors, these strategies could include promoting handwashing, providing adult supervision during snacks and meals, and cleaning surfaces where food is eaten or prepared.

Additionally, the new evidence-based guidance calls for schools and childcare centers to train personnel to prevent food-related allergic reactions and to recognize and respond appropriately when they do occur.

The guidance also recommends that educational institutions require parents of children with allergies to provide an up-to-date allergy action plan. These action plans can be integrated into the training of teachers and nurses to help manage potential allergic reactions.

Moreover, the guidance suggests that schools keep unassigned epinephrine autoinjectors in stock, both on site and, where laws permit, during off-campus travel, rather than relying solely on students with allergies to bring their own autoinjectors. This represents a more proactive approach to treating anaphylaxis, particularly when treatment is urgently needed and students are away from campus on a school-sponsored trip or event.
 

Expert perspectives

Jennifer A. Dantzer, MD, MHS, allergist-immunologist and assistant professor of pediatrics at Johns Hopkins University, Baltimore, told this news organization via email that the practice guidelines offer an important starting point for protecting the quality of life of students, parents, and school personnel.

While the Centers for Disease Control and Prevention published voluntary guidance for managing food allergies in schools back in 2013, there has since “been a lack of universal policies and procedures to manage the risk of allergic reactions in schools,” explained Dr. Dantzer. “The new guidelines are a good first step of using available evidence and all the key stakeholders, clinicians, school personnel, and families to figure out the best way to keep children with food allergies safe at school.”

Dr. Dantzer wasn’t involved in the creation of the new practice guidelines, but she shared how her clinical experience reinforces the need for the evidence-based recommendations. “Every single week we talk with families, both in clinic and in our research studies, about living with food allergies, and we recognize that every child is different,” she said. “We constantly work to advocate for each individual child with food allergies.”

Pediatric allergist Malika Gupta, MBBS, MD, said in an interview via email that the guidelines could assist in the creation of new nationwide policies for food allergy management at schools. “Also, the guidelines are labeled ‘conditional,’ which gives policymakers the ability to adapt to their specific circumstances and individuals, as well as make modifications according to regional trends,” she added.

Dr. Gupta, a clinical assistant professor in the Division of Allergy and Clinical Immunology at the University of Michigan, Ann Arbor, echoed the guideline panel's sentiments regarding food bans, explaining that prohibiting certain foods could lend a "false sense of security" and could also "promote bullying and a sense of isolation for the food-allergic child." Despite the lack of evidence supporting food bans, Dr. Gupta noted that they can give families a sense of control and security; ideally, more research should be done to determine whether such bans actually work, she added.

In addition to promoting the new guidelines, allergists and pediatricians can also implement proactive allergy reaction mitigation strategies that work with school systems, according to Dr. Gupta. “In-clinic, we ensure all families have food allergy action plans for school and current epinephrine auto-injectors,” she said. “We also often have our food allergy nurses educate schools when food allergy awareness is a concern.”

Many of the 25 authors of the food allergy guidelines disclosed relevant financial relationships; the full list of disclosures is available with the original article. According to a footnote within the guidelines, "Panel members who were deemed to have a real, perceived, or potential conflict of interest were asked to abstain from voting on recommendations related to that interest."

A version of this article first appeared on Medscape.com.
