U.S. Multi-Society Task Force publishes polypectomy guidance

The U.S. Multi-Society Task Force (USMSTF) on Colorectal Cancer recently published recommendations for endoscopic removal of precancerous colorectal lesions.

According to lead author Tonya Kaltenbach, MD, of the University of California, San Francisco, and fellow panelists, the publication aims to improve complete resection rates, which vary widely between endoscopists; in some hands, more than one in five lesions (22.7%) is incompletely removed, contributing to higher rates of colorectal cancer.

“[A]lthough the majority (50%) of postcolonoscopy colon cancers [are] likely due to missed lesions, close to one-fifth of incident cancers [are] related to incomplete resection,” the panelists wrote in Gastroenterology, referring to a pooled analysis of eight surveillance studies.

The panelists’ recommendations, which were based on both evidence and clinical experience, range from specific polyp removal techniques to guidance for institution-wide quality assurance of polypectomies. Each statement is described by both strength of recommendation and level of evidence, the latter of which was determined by the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) criteria. Recommendations were written by a panel of nine experts and approved by the governing boards of the three societies they represented – the American College of Gastroenterology, the American Gastroenterological Association, and the American Society for Gastrointestinal Endoscopy. The recommendations were copublished in the March issues of the American Journal of Gastroenterology, Gastroenterology, and Gastrointestinal Endoscopy.

Central to the publication are recommended polypectomy techniques for specific types of lesions.

“Polypectomy techniques vary widely in clinical practice,” the panelists wrote. “They are often driven by physician preference based on how they were taught and on trial and error, due to the lack of standardized training and the paucity of published evidence. In the past decade, evidence has evolved on the superiority of specific methods.”

“Optimal techniques encompass effectiveness, safety, and efficiency,” they wrote. “Colorectal lesion characteristics, including location, size, morphology, and histology, influence the optimal removal method.”

For lesions up to 9 mm, the panelists recommended cold snare polypectomy “due to high complete resection rates and safety profile.” In contrast, they recommended against both cold and hot biopsy forceps, which have been associated with higher rates of incomplete resection. Furthermore, they cautioned that hot biopsy forceps may increase risks of complications and produce inadequate tissue samples for histopathology.

For nonpedunculated lesions between 10 and 19 mm, guidance is minimal. The panelists recommended cold or hot snare polypectomy, although this statement was conditional and based on low-quality evidence.

Recommendations were more extensive for large nonpedunculated lesions (at least 20 mm). For such lesions, the panelists strongly recommended endoscopic mucosal resection (EMR). They emphasized that large lesions should be removed in the fewest possible pieces by an appropriately experienced endoscopist during a single colonoscopy session. The panelists recommended the use of a viscous injection solution with a contrast agent and adjuvant thermal ablation of the post-EMR margin. They recommended against the use of tattoo as the submucosal injection solution and against ablation of residual lesion tissue that remains endoscopically visible. Additional recommendations for large lesions, including prophylactic closure of resection defects and coagulation techniques, were based on low-quality evidence.

For pedunculated lesions greater than 10 mm, the panelists recommended hot snare polypectomy. For pedunculated lesions with a head greater than 20 mm or a stalk thickness greater than 5 mm, they recommended prophylactic mechanical ligation.
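As a compact, purely editorial summary (this lookup is our condensation of the statements above, not part of the guideline itself), the size- and morphology-based recommendations can be sketched as a simple decision function:

```python
# Editorial sketch: condenses the USMSTF size/morphology recommendations
# summarized above into a lookup. Categories and thresholds come from the
# text; anything the text does not cover is flagged as such.

def recommended_technique(size_mm: float, pedunculated: bool = False) -> str:
    """Return the removal technique recommended in the text for a lesion."""
    if pedunculated:
        # The text specifies hot snare for pedunculated lesions >10 mm, with
        # prophylactic mechanical ligation if head >20 mm or stalk >5 mm.
        if size_mm > 10:
            return "hot snare polypectomy"
        return "not specified in this summary"
    if size_mm <= 9:
        return "cold snare polypectomy"
    if size_mm <= 19:
        return "cold or hot snare polypectomy (conditional recommendation)"
    return "endoscopic mucosal resection (EMR)"
```

For example, `recommended_technique(25)` returns the EMR recommendation for a large nonpedunculated lesion.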

Beyond lesion assessment and removal, recommendations addressed lesion marking, equipment, surveillance, and quality of polypectomy.

Concerning quality, the panelists recommended that endoscopists participate in a quality assurance program that documents adverse events, and that institutions use standardized polypectomy competency assessments, such as the Cold Snare Polypectomy Competency Assessment Tool and/or the Direct Observation of Polypectomy Skills tool.

“Focused teaching is needed to ensure the optimal endoscopic management of colorectal lesions,” the panelists wrote. They went on to suggest that “development and implementation of polypectomy quality metrics may be necessary to optimize practice and outcomes.”

“For example, the type of resection method used for the colorectal lesion removal in the procedure report should be documented, and the inclusion of adequate resection technique as a quality indicator in colorectal cancer screening programs should be considered,” they wrote. “Adverse events, including bleeding, perforation, hospital admissions, and the number of benign colorectal lesions referred for surgical management, should be measured and reported. Finally, standards for pathology preparation and reporting of lesions suspicious for submucosal invasion should be in place to provide accurate staging and management.”

The investigators reported relationships with Covidien, Ironwood, Medtronic, and others.

SOURCE: Kaltenbach T et al. Gastroenterology. 2020 Jan 18. doi: 10.1053/j.gastro.2019.12.018.

Breast cancer treatments veer from guidelines

Women with breast cancer may be receiving treatments that are discordant with guideline recommendations for genetic subtypes of disease, based on a retrospective analysis of more than 20,000 patients.

Radiotherapy and chemotherapy practices were particularly out of alignment with guidelines, reported lead author Allison W. Kurian, MD, of Stanford (Calif.) University, and colleagues.

“Integrating genetic testing into breast cancer care has been complex and challenging,” the investigators wrote in JAMA Oncology. “There is wide variability in which clinicians order testing and disclose results, in the clinical significance of results, and in how clinicians interpret results to patients.”

According to the investigators, while germline testing is on the rise, little is known about how these test results are translating to clinical care.

To learn more, the investigators evaluated data from 20,568 women with stage 0-III breast cancer who entered the Surveillance, Epidemiology, and End Results registries of Georgia and California between 2014 and 2016.

Three treatment types were evaluated: surgery (bilateral vs. unilateral mastectomy), radiotherapy after lumpectomy, and chemotherapy. Treatment selection was compared with test results for breast cancer–associated genes, such as BRCA1/2, TP53, PTEN, and others. Associations were then compared with guideline recommendations.

Data analysis suggested that many clinicians were correctly using genetic test results to guide surgical decisions. For example, almost two-thirds (61.7%) of women with a BRCA mutation underwent bilateral mastectomy, compared with one-quarter (24.3%) who were BRCA negative (odds ratio, 5.52). For other pathogenic variants, the rate of bilateral mastectomy was still elevated, albeit to a lesser degree (OR, 2.41).
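The reported odds ratio is presumably adjusted for covariates, so as a rough, editorial sanity check only, the crude odds ratio implied by the two raw proportions can be computed directly:

```python
# Editorial sketch: crude (unadjusted) odds ratio from the two raw
# bilateral-mastectomy rates quoted above. The published OR (5.52) is
# presumably covariate-adjusted, so the crude value differs slightly.

def odds(p: float) -> float:
    """Convert a proportion to odds."""
    return p / (1 - p)

p_brca_pos = 0.617  # bilateral mastectomy rate, BRCA-positive
p_brca_neg = 0.243  # bilateral mastectomy rate, BRCA-negative

crude_or = odds(p_brca_pos) / odds(p_brca_neg)
print(f"crude odds ratio ≈ {crude_or:.2f}")  # ≈ 5.02
```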

Generally, these practices align with recommendations, the investigators wrote, noting that research supports bilateral mastectomy with BRCA1/2, TP53, and PTEN variants, while data are lacking for other genetic subtypes.

Radiotherapy and chemotherapy practices were more discordant with guidelines. For example, women with a BRCA mutation had 78% lower odds of receiving radiotherapy after lumpectomy (OR, 0.22) and 76% higher odds of receiving chemotherapy for early-stage, hormone-positive disease (OR, 1.76). According to the investigators, these findings suggest possible undertreatment and overtreatment, respectively.
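The percent figures in the paragraph above are straightforward transformations of the odds ratios; a short sketch makes the mapping explicit:

```python
# Editorial sketch: an odds ratio below 1 reads as "(1 - OR) x 100% lower
# odds"; one above 1 as "(OR - 1) x 100% higher odds".

def percent_change(odds_ratio: float) -> float:
    """Express an odds ratio as a signed percent change in odds."""
    return (odds_ratio - 1) * 100

print(percent_change(0.22))  # about -78: "78% lower odds"
print(percent_change(1.76))  # about +76: "76% higher odds"
```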

“We believe more research is needed to confirm our results and to evaluate long-term outcomes of pathogenic variant carriers to understand treatment decision making and consequences,” the investigators concluded.

The study was funded by the National Institutes of Health and the California Department of Public Health. The investigators reported relationships with Myriad Genetics, Genomic Health, Roche, and other companies.

SOURCE: Kurian AW et al. JAMA Oncol. 2020 Feb 6. doi: 10.1001/jamaoncol.2019.6400.

APOE genotype directly regulates alpha-synuclein accumulation

Apolipoprotein E epsilon 4 (APOE4) directly and independently exacerbates accumulation of alpha-synuclein in patients with Lewy body dementia, whereas APOE2 may have a protective effect, based on two recent studies involving mouse models and human patients.

These insights confirm the importance of APOE in synucleinopathies, and may lead to new treatments, according to Eliezer Masliah, MD, director of the division of neuroscience at the National Institute on Aging.

“These [studies] definitely implicate a role of APOE4,” Dr. Masliah said in an interview.

According to Dr. Masliah, previous studies linked the APOE4 genotype with cognitive decline in synucleinopathies, but underlying molecular mechanisms remained unknown.

“We [now] have more direct confirmation [based on] different experimental animal models,” Dr. Masliah said. “It also means that APOE4 could be a therapeutic target for dementia with Lewy bodies.”

The two studies were published simultaneously in Science Translational Medicine. The first study was conducted by Albert A. Davis, MD, PhD, of Washington University, St. Louis, and colleagues; the second was led by Na Zhao, MD, PhD, of the Mayo Clinic in Jacksonville, Fla.

“The studies are very synergistic, but used different techniques,” said Dr. Masliah, who was not involved in the studies.

Both studies involved mice that expressed a human variant of APOE: APOE2, APOE3, or APOE4. Three independent techniques were used to concurrently overexpress alpha-synuclein; Dr. Davis and colleagues used a transgenic approach, as well as striatal injection of alpha-synuclein preformed fibrils, whereas Dr. Zhao and colleagues turned to a viral vector. Regardless of technique, each APOE variant had a distinct impact on the level of alpha-synuclein accumulation.

“In a nutshell, [Dr. Davis and colleagues] found that those mice that have synuclein and APOE4 have a much more rapid progression of the disease,” Dr. Masliah said. “They become Parkinsonian much faster, but also, they become cognitively impaired much faster, and they have more synuclein in the brain. Remarkably, on the opposite side, those that were expressing APOE2, which we know is a protective allele, actually were far less impaired. So that’s really a remarkable finding.”

The study at the Mayo Clinic echoed these findings.

“Essentially, [Dr. Zhao and colleagues] had very similar results,” Dr. Masliah said. “[In mice expressing] APOE4, synuclein accumulation was worse and pathology was worse, and with APOE2, there was relative protection.”

Both studies found that the exacerbating effect of APOE4 translated to human patients.

Dr. Davis and colleagues evaluated data from 251 patients in the Parkinson’s Progression Markers Initiative. A multivariate model showed that patients with the APOE4 genotype had faster cognitive decline, an impact that was independent of other variables, including cerebrospinal fluid concentrations of amyloid beta and tau protein (P = .0119). This finding was further supported by additional analyses involving 177 patients with Parkinson’s disease from the Washington University Movement Disorders Center, and another 1,030 patients enrolled in the NeuroGenetics Research Consortium study.

Dr. Zhao and colleagues evaluated postmortem samples from patients with Lewy body dementia who had minimal amyloid pathology. Comparing 22 APOE4 carriers with 22 age- and sex-matched noncarriers, they found that carriers had significantly greater accumulations of alpha-synuclein (P less than .05).

According to the investigators, these findings could have both prognostic and therapeutic implications.


Apolipoprotein E epsilon 4 (APOE4) directly and independently exacerbates accumulation of alpha-synuclein in patients with Lewy body dementia, whereas APOE2 may have a protective effect, based on two recent studies involving mouse models and human patients.

Dr. Eliezer Masliah

These insights confirm the importance of APOE in synucleinopathies, and may lead to new treatments, according to Eliezer Masliah, MD, director of the division of neuroscience at the National Institute on Aging.

“These [studies] definitely implicate a role of APOE4,” Dr. Masliah said in an interview.

According to Dr. Masliah, previous studies linked the APOE4 genotype with cognitive decline in synucleinopathies, but underlying molecular mechanisms remained unknown.

“We [now] have more direct confirmation [based on] different experimental animal models,” Dr. Masliah said. “It also means that APOE4 could be a therapeutic target for dementia with Lewy bodies.”

The two studies were published simultaneously in Science Translational Medicine. The first study was conducted by Albert A. Davis, MD, PhD, of Washington University, St. Louis, and colleagues; the second was led by Na Zhao, MD, PhD, of the Mayo Clinic in Jacksonville, Fla.

“The studies are very synergistic, but used different techniques,” said Dr. Masliah, who was not involved in the studies.

Both studies involved mice that expressed a human variant of APOE: APOE2, APOE3, or APOE4. Three independent techniques were used to concurrently overexpress alpha-synuclein; Dr. Davis and colleagues used a transgenic approach, as well as striatal injection of alpha-synuclein preformed fibrils, whereas Dr. Zhao and colleagues turned to a viral vector. Regardless of technique, each APOE variant had a distinct impact on the level of alpha-synuclein accumulation.

“In a nutshell, [Dr. Davis and colleagues] found that those mice that have synuclein and APOE4 have a much more rapid progression of the disease,” Dr. Masliah said. “They become Parkinsonian much faster, but also, they become cognitively impaired much faster, and they have more synuclein in the brain. Remarkably, on the opposite side, those that were expressing APOE2, which we know is a protective allele, actually were far less impaired. So that’s really a remarkable finding.”

The study at the Mayo Clinic echoed these findings.

“Essentially, [Dr. Zhao and colleagues] had very similar results,” Dr. Masliah said. “[In mice expressing] APOE4, synuclein accumulation was worse and pathology was worse, and with APOE2, there was relative protection.”

Both studies found that the exacerbating effect of APOE4 translated to human patients.

Dr. Davis and colleagues evaluated data from 251 patients in the Parkinson’s Progression Markers Initiative. A multivariate model showed that patients with the APOE4 genotype had faster cognitive decline, an impact that was independent of other variables, including cerebrospinal fluid concentrations of amyloid beta and tau protein (P = .0119). This finding was further supported by additional analyses involving 177 patients with Parkinson’s disease from the Washington University Movement Disorders Center, and another 1,030 patients enrolled in the NeuroGenetics Research Consortium study.

Dr. Zhao and colleagues evaluated postmortem samples from patients with Lewy body dementia who had minimal amyloid pathology. Comparing 22 APOE4 carriers versus 22 age- and sex-matched noncarriers, they found that carriers had significantly greater accumulations of alpha-synuclein (P < .05).

According to the investigators, these findings could have both prognostic and therapeutic implications.

“[I]t is intriguing to speculate whether APOE and other potential genetic risk or resilience genes could be useful as screening tools to stratify risk for individual patients,” Dr. Davis and colleagues wrote in their paper. They went on to suggest that APOE genotyping may one day be used to personalize treatments for patients with neurodegenerative disease.

According to Dr. Masliah, several treatment strategies are under investigation.

“There are some pharmaceutical companies and also some academic groups that have been developing antibodies against APOE4 for Alzheimer’s disease, but certainly that could also be used for dementia with Lewy bodies,” he said. “There are other ways. One could [be] to suppress the expression of APOE4 with antisense or other technologies.

“There is also a very innovative technology that has been developed by the group at the Gladstone Institutes in San Francisco, which is to switch APOE4 to APOE3.” This technique, Dr. Masliah explained, is accomplished by breaking a disulfide bond in APOE4, which opens the structure into an isoform that mimics APOE3. “They have developed small molecules that actually can break that bond and essentially chemically switch APOE4 to APOE3,” he said.

Although multiple techniques are feasible, Dr. Masliah stressed that these therapeutic efforts are still in their infancy.

“We need to better understand the mechanisms as to how APOE4 and alpha-synuclein interact,” he said. “I think we need a lot more work in this area.”

The Davis study was funded by the American Academy of Neurology/American Brain Foundation, the BrightFocus Foundation, the Mary E. Groff Charitable Trust, and others; the investigators reported additional relationships with Biogen, Alector, Parabon, and others. The Zhao study was funded by the National Institutes of Health and the Lewy Body Dementia Center Without Walls; the investigators reported no competing interests. Dr. Masliah reported no conflicts of interest.

SOURCES: Davis AA et al. Sci Transl Med. 2020 Feb 5. doi: 10.1126/scitranslmed.aay3069; Zhao N et al. Sci Transl Med. 2020 Feb 5. doi: 10.1126/scitranslmed.aay1809.



FROM SCIENCE TRANSLATIONAL MEDICINE


Flow-mediated dilation of brachial artery predicts renal dysfunction in sickle cell disease


Sonographic flow-mediated dilation (FMD) of the brachial artery predicts renal dysfunction in patients with sickle cell disease (SCD), according to investigators.

Mohammed Haneefa Nizamudeen/Getty Images

This is the first study to show that FMD – a surrogate biomarker for endothelial dysfunction – inversely correlates with renal artery resistivity index (RARI) and serum cystatin C, reported lead author Oluwagbemiga Oluwole Ayoola, MBChB, of Obafemi Awolowo University in Ile-Ife, Nigeria, and colleagues.

“[B]rachial artery FMD is an essential test in the management of SCD patients for noninvasive assessment of the vascular endothelium,” the investigators wrote in Kidney360. They went on to suggest that FMD could be used to detect early renal impairment in sickle cell disease.

The study involved 44 patients with steady-state, homozygous SCD (HbSS) and 33 age- and sex-matched controls (HbAA). Eligibility criteria excluded individuals with risk factors for endothelial dysfunction, such as obesity, diabetes, and hypertension, as well as those with thalassemia carrier traits.

For each participant, various data were gathered, including demographic and clinical characteristics, serum assays, FMD measurement of the brachial artery, and RARI.

Results showed that patients with sickle cell disease had a significantly lower median FMD value than that of healthy controls (3.44 vs. 5.35; P = .043).

Among patients with SCD, FMD was negatively and independently correlated with RARI (r = -.307; P = .042) and serum cystatin C (r = -.372; P = .013), correlations that the investigators described as “modest.” FMD was not associated with any other biomarkers of SCD severity, such as homocysteine, fetal hemoglobin, or soluble platelet selectin.

Patients in the SCD cohort were further subdivided into two groups based on an FMD cut-off value of 5.35, which was the median measurement among healthy controls. This revealed that median cystatin C level was significantly higher in patients with an FMD value less than 5.35, compared with those who had an FMD value of 5.35 or more.

“[The study] findings suggest that SCD patients with impaired FMD are more likely to have impaired renal function,” the investigators wrote. The results support previous research, they added.

“Even though our findings show relationships rather than causation, we believe it is still a step forward in the ongoing quest to unravel the mysteries of this genetic disease,” they concluded. “Determining the exact age at which FMD impairment [begins] in children with sickle cell disease could be the subject of a future study.”

The study was funded by the Obafemi Awolowo University Teaching Hospital. The investigators reported no conflicts of interest.

SOURCE: Ayoola et al. Kidney360. 2020 Jan 30. doi: 10.34067/KID.0000142019.



FROM KIDNEY360


Large study probes colonoscopy surveillance intervals

Lengthen LRA surveillance intervals

Compared with patients who have normal baseline colonoscopy findings, those with low-risk adenomas may not have elevated risks of colorectal cancer (CRC) or CRC-related death, based on a retrospective analysis of more than 64,000 patients.

In contrast, patients with high-risk adenomas at baseline had significantly elevated rates of both CRC and CRC-related death, reported lead author Jeffrey K. Lee, MD, of Kaiser Permanente San Francisco and colleagues.

With additional research, these findings may influence colonoscopy surveillance intervals, the investigators wrote in Gastroenterology.

“Current guidelines recommend that patients with a low-risk adenoma finding ... receive surveillance colonoscopy in 5-10 years, although in practice, clinicians often use even more frequent surveillance ... in this low-risk group,” they wrote. “The rationale for continued support of shorter-than-recommended surveillance intervals for patients with low-risk adenomas is unclear, but could stem from a lack of long-term population-based studies assessing colorectal cancer incidence and related deaths following low-risk adenoma removal or randomized trials evaluating optimal postpolypectomy surveillance intervals.”

To alleviate this knowledge gap, the investigators began by screening data from 186,046 patients who underwent baseline colonoscopy between 2004 and 2010 at 21 medical centers in California. Following exclusions based on family history, confounding gastrointestinal diseases, and incomplete data, 64,422 patients remained. Among these patients, the mean age was 61.6 years, with a slight female majority (54.3%). Almost three out of four patients (71.2%) had normal colonoscopy findings, followed by smaller proportions who were diagnosed with low-risk adenoma (17.0%) or high-risk adenoma (11.7%), based on United States Multi-Society Task Force guidelines.

After a median follow-up of 8.1 years, 117 patients who had normal colonoscopy findings developed CRC, 22 of whom died from the disease. In comparison, the low-risk adenoma group had 37 cases of CRC and 3 instances of CRC-related death, whereas the high-risk adenoma group had 60 cases of CRC and 13 instances of CRC-related death.

In the no-adenoma and low-risk groups, trends in age-adjusted CRC incidence rates were similar; in both cohorts, CRC incidence increased gradually over the decade following colonoscopy, with each group reaching approximately 50 cases per 100,000 person-years by year 10. In contrast, CRC incidence climbed rapidly in the high-risk adenoma group, ultimately peaking a decade after colonoscopy at almost 220 cases per 100,000 person-years. Average incidence rates per 100,000 person-years were similar among patients with no adenoma (31.1) and low-risk adenoma (38.8), but markedly higher among those with high-risk adenoma (90.8). At the end of the 14-year follow-up period, absolute risks of CRC among patients with no adenoma, low-risk adenoma, and high-risk adenoma were 0.51%, 0.57%, and 2.03%, respectively.

Based on covariate-adjusted Cox regression models, patients with low-risk adenoma did not have a significantly higher risk of CRC or CRC-related death than did patients with no adenoma. In contrast, patients with high-risk adenoma had significantly higher risks of CRC (hazard ratio, 2.61) and CRC-related death (HR, 3.94).

“These findings support guideline recommendations for intensive colonoscopy surveillance in [patients with high-risk adenomas at baseline],” the investigators wrote.

Considering similar risks between patients with low-risk adenomas and those with normal findings, the investigators suggested that longer surveillance intervals may be acceptable for both of these patient populations.

“Guidelines recommending comparable follow-up for low-risk adenomas and normal examinations, such as lengthening the surveillance interval to more than 5 years and possibly 10 years, may provide comparable cancer incidence and mortality benefits for these two groups,” they wrote.

Still, the investigators noted that study limitations – such as disparate rates of subsequent colonoscopy between groups – make it difficult to draw definitive, practice-changing conclusions.

“Additional studies, potentially including randomized trials, on the natural history of low-risk adenoma and normal findings without intervening surveillance exams before 10 years are needed to help guide future surveillance practices,” they concluded.

The study was supported by the National Cancer Institute and the American Gastroenterological Association. The investigators disclosed no conflicts of interest.

 

SOURCE: Lee JK et al. Gastroenterology. 2019 Oct 4. doi: 10.1053/j.gastro.2019.09.039.

Dr. Joseph C. Anderson
The current CRC surveillance paradigm stratifies adults into high- and low-risk groups based on index findings. However, there are few data on postcolonoscopy CRC incidence to support this approach. Lee et al. provided valuable long-term data in their retrospective analysis from an integrated health organization. While index high-risk adenomas were associated with an increased CRC risk, compared with no adenomas, low-risk adenomas (LRA; 1-2 tubular adenomas less than 1 cm) had no increased risk. A lower CRC mortality in those with LRAs decreased the likelihood that the observed CRCs resulted from overdiagnosis or lead time bias caused by differences in exposure among the three groups to subsequent surveillance colonoscopies, a common issue in long-term studies. These data add to growing evidence, such as that from the Prostate, Lung, Colorectal and Ovarian Cancer Trial, that supports lengthening current surveillance intervals for LRAs.

 


Study strengths include a large sample and inclusion of quality measures such as adenoma detection rates. However, to examine conventional adenoma risk, individuals with serrated polyps were excluded, and thus the impact of these lesions is unclear. Since New Hampshire Colonoscopy Registry data demonstrate a higher risk of metachronous advanced adenomas for those with both sessile serrated polyps and high-risk adenomas, long-term CRC data for serrated polyps are crucial. In addition, data from short-term studies suggest that there may be heterogeneity in risk for LRAs, with a higher risk for an 8-mm lesion than for a 3-mm one. Thus, we await more long-term studies to address these and other issues.

 

 

Joseph C. Anderson, MD, MHCDS, is an associate professor of medicine at White River Junction VAMC, Dartmouth College, Hanover, N.H., and the University of Connecticut Health Center, Farmington, Conn. The contents of this work do not represent the views of the Department of Veterans Affairs or the United States Government. He has no relevant conflicts of interest.

 


Compared with patients who have normal baseline colonoscopy findings, those with low-risk adenomas may not have elevated risks of colorectal cancer (CRC) or CRC-related death, based on a retrospective analysis of more than 64,000 patients.

In contrast, patients with high-risk adenomas at baseline had significantly elevated rates of both CRC and CRC-related death, reported lead author Jeffrey K. Lee, MD, of Kaiser Permanente San Francisco and colleagues.

With additional research, these findings may influence colonoscopy surveillance intervals, the investigators wrote in Gastroenterology.

“Current guidelines recommend that patients with a low-risk adenoma finding ... receive surveillance colonoscopy in 5-10 years, although in practice, clinicians often use even more frequent surveillance ... in this low-risk group,” they wrote. “The rationale for continued support of shorter-than-recommended surveillance intervals for patients with low-risk adenomas is unclear, but could stem from a lack of long-term population-based studies assessing colorectal cancer incidence and related deaths following low-risk adenoma removal or randomized trials evaluating optimal postpolypectomy surveillance intervals.”

To alleviate this knowledge gap, the investigators began by screening data from 186,046 patients who underwent baseline colonoscopy between 2004 and 2010 at 21 medical centers in California. Following exclusions based on family history, confounding gastrointestinal diseases, and incomplete data, 64,422 patients remained. Among these patients, the mean age was 61.6 years, with a slight female majority (54.3%). Almost three out of four patients (71.2%) had normal colonoscopy findings, followed by smaller proportions who were diagnosed with low-risk adenoma (17.0%) or high-risk adenoma (11.7%), based on United States Multi-Society Task Force guidelines.

After a median follow-up of 8.1 years, 117 patients who had normal colonoscopy findings developed CRC, 22 of whom died from the disease. In comparison, the low-risk adenoma group had 37 cases of CRC and 3 instances of CRC-related death, whereas the high-risk adenoma group had 60 cases of CRC and 13 instances of CRC-related death.

In the no-adenoma and low-risk groups, trends in age-adjusted CRC incidence rates were similar; in both cohorts, CRC incidence increased gradually over the decade following colonoscopy, with each group reaching approximately 50 cases per 100,000 person-years by year 10. In contrast, CRC incidence climbed rapidly in the high-risk adenoma group, ultimately peaking a decade later at almost 220 cases per 100,000 person-years. Average incidence rates per 100,000 person-years were similar among patients with no adenoma (31.1) and low-risk adenoma (38.8), but markedly higher among those with high-risk adenoma (90.8). At the end of the 14-year follow-up period, absolute risks of CRC among patients with no adenoma, low-risk adenoma, and high-risk adenoma were 0.51%, 0.57%, and 2.03%, respectively.

Based on covariate-adjusted Cox regression models, patients with low-risk adenoma did not have a significantly higher risk of CRC or CRC-related death than did patients with no adenoma. In contrast, patients with high-risk adenoma had significantly higher risks of CRC (hazard ratio, 2.61) and CRC-related death (HR, 3.94).

“These findings support guideline recommendations for intensive colonoscopy surveillance in [patients with high-risk adenomas at baseline],” the investigators wrote.

Considering similar risks between patients with low-risk adenomas and those with normal findings, the investigators suggested that longer surveillance intervals may be acceptable for both of these patient populations.

“Guidelines recommending comparable follow-up for low-risk adenomas and normal examinations, such as lengthening the surveillance interval to more than 5 years and possibly 10 years, may provide comparable cancer incidence and mortality benefits for these two groups,” they wrote.

Still, the investigators noted that study limitations – such as disparate rates of subsequent colonoscopy between groups – make it difficult to draw definitive, practice-changing conclusions.

“Additional studies, potentially including randomized trials, on the natural history of low-risk adenoma and normal findings without intervening surveillance exams before 10 years are needed to help guide future surveillance practices,” they concluded.

The study was supported by the National Cancer Institute and the American Gastroenterological Association. The investigators disclosed no conflicts of interest.

 

SOURCE: Lee JK et al. Gastroenterology. 2019 Oct 4. doi: 10.1053/j.gastro.2019.09.039.



FROM GASTROENTEROLOGY


Key clinical point: High-risk adenomas, but not low-risk adenomas, are associated with increased long-term risks of colorectal cancer (CRC) and CRC-related death.

Major finding: Compared with patients without adenomas, patients diagnosed with high-risk adenoma had a significantly increased risk of colorectal cancer (hazard ratio, 2.61).

Study details: A retrospective cohort study involving 64,422 patients who underwent colonoscopy between 2004 and 2010.

Disclosures: The study was supported by the National Cancer Institute and the American Gastroenterological Association. The investigators disclosed no conflicts of interest.

Source: Lee JK et al. Gastroenterology. 2019 Oct 4. doi: 10.1053/j.gastro.2019.09.039.


Mailed fecal testing may catch more cancer than endoscopic screening


On a population level, mailed fecal immunochemical tests (FITs) may catch more cases of advanced neoplasia than endoscopic methods, based on a Dutch screening study that invited more than 30,000 people to participate.

The relative success of mailed FIT screening was largely due to a participation rate of 73%, compared with participation rates between 24% and 31% among those invited to undergo endoscopic screening, reported lead author Esmée J. Grobbee, MD, of Erasmus University Medical Centre in Rotterdam, the Netherlands, and colleagues.

In addition to high participation, previous research has shown that successful FIT screening depends upon continued adherence to the screening program, the investigators wrote in Clinical Gastroenterology and Hepatology. They noted that, in the present study, just two rounds of FIT were needed to outperform endoscopic methods, and that these comparative findings are a first for the field.

“No literature is available on the comparison between endoscopic screening strategies and multiple rounds of FIT screening,” the investigators wrote. “It is of key importance for policy makers to know the impact of different screening programs over multiple rounds with long-term follow-up.”

To this end, the investigators invited 30,052 screening-naive people in the Netherlands to participate in the present study. Each invitation was for one of three groups: once-only colonoscopy, once-only flexible sigmoidoscopy, or four rounds of FIT. All individuals received an advance notification by mail followed 2 weeks later by a more substantial information kit (and first FIT test when applicable). If these steps received no response, a reminder was sent 6 weeks later.

Participants in the FIT group received one test every 2 years. Patients who had a positive FIT (hemoglobin concentration of at least 10 mcg Hb/g feces) were scheduled for a colonoscopy. Similarly, colonoscopies were performed in patients who had concerning findings on flexible sigmoidoscopy (e.g., sessile serrated adenoma). This sequential system reduced the relative number of colonoscopies in these two groups; colonoscopy rates in the FIT group and flexible sigmoidoscopy group were 13% and 3%, respectively, compared with the 24% participation rate in the colonoscopy group.

At a population level, FIT screening had the highest advanced neoplasia detection rate, at 4.5%, compared with 2.3% and 2.2% for screening by sigmoidoscopy and colonoscopy, respectively.

“In the intention-to-screen analysis, FIT already detected significantly more advanced neoplasia and colorectal cancer (CRC) after only 2 rounds of FIT, and this difference increased over rounds,” the investigators noted.

Again in the intention-to-screen population, mailed FIT detected three times as many cases of CRC as either of the other two groups (0.6% vs. 0.2% for both). In contrast, colonoscopy and sigmoidoscopy had higher detection rates for nonadvanced adenomas, at 5.6% and 3.7%, respectively, compared with 3.2% for FIT, although the investigators noted that nonadvanced adenomas are “of uncertain clinical importance.” Sessile adenoma detection rates were similar across all three groups.

The as-screened analysis revealed higher detection rates of advanced neoplasia for colonoscopy (9.1%), compared with sigmoidoscopy (7.4%) and FIT (6.1%). In the same analysis, detection rates of CRC were comparable across all three groups.

According to the investigators, the CRC-related findings require careful interpretation.

“Comparing CRC detection rates of FIT and endoscopic screening is complex … because CRCs detected in FIT screening could in theory have been prevented in a once-only colonoscopy by the removal of adenomas,” they wrote.

Still, the key takeaway of the study – that FIT screening was the most effective strategy – may have practical implications on a global scale, according to the investigators.

“Because many countries are considering implementing screening programs, the findings of this study aid in deciding on choice of screening strategies worldwide, which is based on expected participation rates and available colonoscopy resources,” they wrote.

The study was funded by the Netherlands Organization for Health Research and Development. The investigators disclosed no conflicts of interest.

SOURCE: Grobbee EJ et al. Clin Gastro Hepatol. 2019 Aug 13. doi: 10.1016/j.cgh.2019.08.015.




FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY


GALAD score predicts NASH-HCC more than a year in advance

Ultrasound surveillance works poorly in NASH

For patients with nonalcoholic steatohepatitis (NASH), the GALAD score may accurately predict hepatocellular carcinoma (HCC) as early as 560 days before diagnosis, according to investigators.

The GALAD score, which combines sex, age, alpha-fetoprotein-L3 (AFP-L3), alpha-fetoprotein (AFP), and des-gamma-carboxyprothrombin (DCP), could improve cancer surveillance among NASH patients whose obesity limits the sensitivity of ultrasound, reported lead author Jan Best, MD, of the University Hospital Magdeburg in Germany, and colleagues.

“The limitations of ultrasound surveillance alone for early detection of HCC are particularly evident in patients with NASH,” the investigators wrote in Clinical Gastroenterology and Hepatology. “Serum-based biomarkers might be more effective, with or without ultrasound surveillance, for HCC surveillance in NASH patients, although data in this patient population are currently lacking. The current study assessed the performance of the GALAD score for early HCC detection in patients with NASH-related liver disease.”
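The GALAD score itself is a simple logistic-regression-derived formula. As a rough illustration only – the coefficients below come from the originally published Johnson et al. GALAD model rather than from this study, and the patient values are entirely hypothetical – the calculation looks like this:

```python
import math

def galad_score(age, male, afp, afp_l3_pct, dcp):
    # Coefficients from the originally published GALAD model
    # (Johnson et al., 2014); they are not restated in this article.
    # AFP (ng/mL) and DCP (mAU/mL) enter on a log10 scale;
    # AFP-L3 is expressed as a percentage of total AFP.
    return (-10.08
            + 0.09 * age
            + 1.67 * (1 if male else 0)
            + 2.34 * math.log10(afp)
            + 0.04 * afp_l3_pct
            + 1.33 * math.log10(dcp))

# -1.334 is the optimal cutoff reported in the retrospective cohort
# described above; the example values below are hypothetical.
CUTOFF = -1.334
score = galad_score(age=65, male=True, afp=8.0, afp_l3_pct=12.0, dcp=30.0)
print(score > CUTOFF)  # this hypothetical patient screens positive
```

Because sex and age are fixed, a rising score over time for a given patient is driven entirely by the three serum markers, which is what allows the longitudinal tracking described in the prospective component below.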

The study consisted of two parts: first, a retrospective case-control analysis, and second, a phase 3 prospective trial that implemented the GALAD score in a real-world population.

The retrospective component of the study involved 126 NASH patients with HCC (cases) and 231 NASH patients without HCC (controls), all of whom were treated at eight centers in Germany. The median GALAD score was significantly higher among NASH patients with HCC than in those without (2.93 vs. –3.96; P less than .001). At an optimal cutoff of –1.334, the GALAD score predicted HCC with a sensitivity of 91.2% and a specificity of 95.2%. Each component of the GALAD score aligned with previously published findings, as patients with HCC were predominantly older men with elevated serum AFP-L3, AFP, and DCP. But a closer look at the data showed that the GALAD score more accurately predicted HCC than any of its constituent serum measurements in isolation. For any stage of HCC, GALAD had an area under the curve (AUC) of 0.96, compared with significantly lower values for AFP (0.88), AFP-L3 (0.86), and DCP (0.87). Similarly, for early-stage HCC, GALAD score AUC was 0.92, compared with significantly lower values for AFP (0.77), AFP-L3 (0.74), and DCP (0.87).

The accuracy of the GALAD score – for detection of both any-stage and early-stage HCC – remained high regardless of cirrhosis status. Among patients with cirrhosis, the AUC was 0.93 for any-stage HCC and 0.85 for early-stage HCC. For patients without cirrhosis, GALAD was slightly more predictive, with AUCs of 0.98 and 0.94 for detection of any-stage and early-stage HCC, respectively. Again, these accuracy values significantly outmatched each serum measurement in isolation.

“These data on NASH-HCC patients demonstrate that GALAD can detect HCC independent of cirrhosis or stage of HCC,” the investigators wrote. “Indeed, even early noncirrhotic NASH-HCC seems clearly separable from NASH controls, as even small groups resulted in robust performance.”

The prospective component of the study involved screening 392 patients with NASH at a single treatment center in Japan. From this cohort, 28 patients developed HCC after a median of 10.1 years. Many patients in this group had significantly higher GALAD scores for 5 or more years before being diagnosed with HCC, and scores rose sharply in the months preceding diagnosis. Depending on the selected cutoff value, the GALAD score predicted HCC from 200 to 560 days prior to diagnosis.

“While this specific result has to be confirmed in further prospective studies, it is a promising observation for potential use of GALAD as a screening tool in NASH patients,” the investigators wrote.

“In conclusion, our data confirm that the GALAD score is superior to individual serum markers for detection of HCC in NASH, independent of tumor stage or cirrhosis,” they concluded. “The findings suggest that GALAD should be investigated as a potential tool for screening of NASH individuals to detect HCC at a resectable stage in a sufficiently large prospective study to identify a cutoff.”

The study was funded by Deutsche Forschungsgemeinschaft, the Wilhelm-Laupitz Foundation, and the Werner Jackstaedt Foundation. The investigators declared no conflicts of interest.

SOURCE: Best J et al. Clin Gastro Hepatol. 2019 Nov 8. doi: 10.1016/j.cgh.2019.11.012.


There has been increasing recognition that ultrasound-based HCC surveillance in patients with cirrhosis has suboptimal sensitivity and specificity for early HCC detection, particularly when applied to those with nonalcoholic steatohepatitis (NASH). These data highlight the critical need for novel biomarkers to improve early HCC detection and reduce mortality. The study by Dr. Best and colleagues evaluated a blood-based biomarker panel, GALAD, in patients with NASH and found that it was able to detect HCC at an early stage with a sensitivity of 68% and specificity of 95% – performance comparable, if not superior, to that of abdominal ultrasound. In an accompanying pilot prospective cohort study, the authors also found GALAD may detect HCC more than 1 year prior to diagnosis. Although earlier studies had similarly demonstrated high performance of GALAD for early HCC detection, this study specifically examined patients with NASH – a cohort that increasingly accounts for HCC cases in the Western world but has been underrepresented in prior studies. Therefore, it is reassuring to know that GALAD appears to have high sensitivity and specificity in this patient group. However, while the data by Best et al. are promising, validation of these results in larger cohort studies is needed before routine adoption in clinical practice. Fortunately, maturation of phase 3 biomarker cohorts, including the Early Detection Research Network Hepatocellular Early Detection Strategy (EDRN HEDS) and Texas HCC Consortium, will facilitate this evaluation in the near future and will hopefully translate promising biomarkers into clinical practice.


Amit G. Singal, MD, is an associate professor of medicine, medical director of the liver tumor program, and chief of hepatology at UT Southwestern Medical Center, Dallas. He has served as a consultant for Wako Diagnostics, Glycotest, Exact Sciences, Roche Diagnostics, and TARGET Pharmasolutions.


Ultrasound surveillance works poorly in NASH

For patients with nonalcoholic steatohepatitis (NASH), the GALAD score may accurately predict hepatocellular carcinoma (HCC) as early as 560 days before diagnosis, according to investigators.

The GALAD score, which combines sex, age, alpha-fetoprotein-L3 (AFP-L3), alpha-fetoprotein, and des-gamma-carboxyprothrombin (DCP), could improve cancer surveillance among NASH patients whose obesity limits sensitivity of ultrasound, reported lead author Jan Best, MD, of the University Hospital Magdeburg in Germany, and colleagues.
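The score combines these five inputs in a logistic-regression-style formula. As a rough illustration only, the sketch below uses the coefficients from the originally published GALAD derivation (Johnson et al., 2014); these numbers are an assumption brought in for context, not data reported in the present study.

```python
from math import log10

# Illustrative GALAD calculation. Coefficients follow the originally
# published model (Johnson et al., 2014) and are an assumption for
# illustration; the present study applied the score, it did not refit it.
def galad_score(age, male, afp, afp_l3_pct, dcp):
    """Combine sex, age, AFP, AFP-L3 (%), and DCP into a single risk score."""
    return (-10.08
            + 0.09 * age
            + 1.67 * (1 if male else 0)
            + 2.34 * log10(afp)
            + 0.04 * afp_l3_pct
            + 1.33 * log10(dcp))

# Hypothetical patients: an older man with elevated markers scores well
# above the study's -1.334 cutoff; a younger woman with low markers
# scores below it.
high = galad_score(age=70, male=True, afp=100, afp_l3_pct=20, dcp=100)
low = galad_score(age=50, male=False, afp=3, afp_l3_pct=5, dcp=10)
```

The structure makes clear why the composite can outperform any single marker: age and male sex shift the baseline risk, while each serum marker contributes on a log scale.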

“The limitations of ultrasound surveillance alone for early detection of HCC are particularly evident in patients with NASH,” the investigators wrote in Clinical Gastroenterology and Hepatology. “Serum-based biomarkers might be more effective, with or without ultrasound surveillance, for HCC surveillance in NASH patients, although data in this patient population are currently lacking. The current study assessed the performance of the GALAD score for early HCC detection in patients with NASH-related liver disease.”

The study consisted of two parts: first, a retrospective case-control analysis, and second, a phase 3 prospective trial that implemented the GALAD score in a real-world population.

The retrospective component of the study involved 126 NASH patients with HCC (cases) and 231 NASH patients without HCC (controls), all of whom were treated at eight centers in Germany. The median GALAD score was significantly higher among NASH patients with HCC than in those without (2.93 vs. –3.96; P less than .001). At an optimal cutoff of –1.334, the GALAD score predicted HCC with a sensitivity of 91.2% and a specificity of 95.2%. Each component of the GALAD score aligned with previously published findings, as patients with HCC were predominantly older men with elevated serum AFP-L3, AFP, and DCP. But a closer look at the data showed that the GALAD score more accurately predicted HCC than any of its constituent serum measurements in isolation. For any stage of HCC, GALAD had an area under the curve (AUC) of 0.96, compared with significantly lower values for AFP (0.88), AFP-L3 (0.86), and DCP (0.87). Similarly, for early-stage HCC, GALAD score AUC was 0.92, compared with significantly lower values for AFP (0.77), AFP-L3 (0.74), and DCP (0.87).

The accuracy of the GALAD score – for detection of both any-stage and early-stage HCC – remained high regardless of cirrhosis status. Among patients with cirrhosis, the AUC was 0.93 for any-stage HCC and 0.85 for early-stage HCC. For patients without cirrhosis, GALAD was slightly more predictive, with AUCs of 0.98 and 0.94 for detection of any-stage and early-stage HCC, respectively. Again, these accuracy values significantly outmatched each serum measurement in isolation.

“These data on NASH-HCC patients demonstrate that GALAD can detect HCC independent of cirrhosis or stage of HCC,” the investigators wrote. “Indeed, even early noncirrhotic NASH-HCC seems clearly separable from NASH controls, as even small groups resulted in robust performance.”

The prospective component of the study involved screening 392 patients with NASH at a single treatment center in Japan. From this cohort, 28 patients developed HCC after a median of 10.1 years. Many patients in this group had significantly higher GALAD scores for 5 or more years before being diagnosed with HCC, and scores rose sharply in the months preceding diagnosis. Depending on selected cutoff value, the GALAD score predicted HCC from 200 to 560 days prior to diagnosis.

“While this specific result has to be confirmed in further prospective studies, it is a promising observation for potential use of GALAD as a screening tool in NASH patients,” the investigators wrote.

“In conclusion, our data confirm that the GALAD score is superior to individual serum markers for detection of HCC in NASH, independent of tumor stage or cirrhosis,” the investigators wrote. “The findings suggest that GALAD should be investigated as a potential tool for screening of NASH individuals to detect HCC at a resectable stage in a sufficiently large prospective study to identify a cutoff.”

The study was funded by Deutsche Forschungsgemeinschaft, the Wilhelm-Laupitz Foundation, and the Werner Jackstaedt Foundation. The investigators declared no conflicts of interest.

SOURCE: Best J et al. Clin Gastroenterol Hepatol. 2019 Nov 8. doi: 10.1016/j.cgh.2019.11.012.



FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY


Circulating tumor cells at baseline predict recurrence in stage III melanoma

Article Type
Changed

Patients with stage III melanoma who have circulating tumor cells (CTCs) at baseline may benefit from adjuvant therapy, according to investigators.

A prospective study showed that patients with at least one CTC upon first presentation had increased risks of both short-term and long-term recurrence, reported lead author Anthony Lucci, MD, of the University of Texas MD Anderson Cancer Center, Houston, and colleagues.

While previous studies have suggested that CTCs hold prognostic value for melanoma patients, no trials had evaluated the CellSearch CTC Test – a standardized technique approved by the Food and Drug Administration – in patients with stage III disease, the investigators wrote. Their report is in Clinical Cancer Research.

In the present study, the investigators tested the CellSearch system in 243 patients with stage III cutaneous melanoma who were treated at MD Anderson Cancer Center. Patients with uveal or mucosal melanoma, or distant metastatic disease, were excluded.

Baseline blood samples were drawn within 3 months of regional lymph node metastasis, determined by either lymphadenectomy or sentinel lymph node biopsy. CTC assay positivity required that at least one CTC was detected within a single 7.5 mL tube of blood.

Out of 243 patients, 90 (37%) had a positive test. Of these 90 patients, almost one-quarter (23%) relapsed within 6 months, compared with 8% of patients who had a negative CTC assay. Within the full follow-up period, which was as long as 64 months, 48% of patients with CTCs at baseline relapsed, compared with 37% of patients without CTCs.

Multivariable regression analysis, which was adjusted for age, sex, pathological nodal stage, Breslow thickness, ulceration, and lymphovascular invasion, showed that baseline CTC positivity was an independent risk factor for melanoma recurrence, both in the short term and the long term. Compared with patients who lacked CTCs, those who tested positive were three times as likely to have disease recurrence within 6 months (hazard ratio, 3.13; P = .018). For relapse-free survival within 54 months, this hazard ratio decreased to 2.25 (P = .006).

Although a Cochran-Armitage test suggested that recurrence risks increased with CTC count, the investigators noted that a minority of patients (17%) had two or more CTCs, and just 5% had three or more CTCs.

According to the investigators, CTCs at baseline could become the first reliable blood-based biomarker for this patient population.

“[CTCs] clearly identified a group of stage III patients at high risk for relapse,” the investigators wrote. “This would be clinically very significant as an independent risk factor to help identify the stage III patients who would benefit most from adjuvant systemic therapy.”

This study was funded by the Kiefer family, Sheila Prenowitz, the Simon and Linda Eyles Foundation, the Sam and Janna Moore family, and the Wintermann Foundation. The investigators reported no conflicts of interest.

SOURCE: Lucci et al. Clin Cancer Res. doi: 10.1158/1078-0432.CCR-19-2670.



FROM CLINICAL CANCER RESEARCH


Esophageal length ratio predicts hiatal hernia recurrence

Article Type
Changed

A new ratio based on manometric esophageal length in relation to patient height could offer an objective means of preoperatively identifying shortened esophagus, which could improve surgical planning and outcomes with hiatal hernia repair, according to investigators.

In a retrospective analysis, patients with a lower manometric esophageal length-to-height (MELH) ratio had a higher rate of hiatal hernia recurrence, reported lead author Pooja Lal, MD, of the Cleveland Clinic, and colleagues.

A short esophagus increases tension at the gastroesophageal junction, which may necessitate a lengthening procedure in addition to hiatal hernia repair, the investigators wrote in the Journal of Clinical Gastroenterology. As lengthening may require additional expertise, preoperative knowledge of a short esophagus is beneficial; however, until this point, short esophagus could only be identified intraoperatively. Since previous attempts to define short esophagus were confounded by patient height, the investigators devised the MELH ratio to account for this variable.

The investigators evaluated data from 245 patients who underwent hiatal hernia repair by Nissen fundoplication, of whom 157 also underwent esophageal lengthening with a Collis gastroplasty. The decision to perform a Collis gastroplasty was made intraoperatively if a patient did not have at least 2-3 cm of intra-abdominal esophageal length with minimal tension.

For all patients, the MELH ratio was determined by dividing manometric esophageal length by patient height (both in centimeters).
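This is a simple quotient, sketched below with hypothetical example values (the 170-cm height is illustrative, not patient data from the study; the 0.12 cutoff echoes the threshold the authors highlight in their conclusion):

```python
# Minimal sketch of the MELH ratio described above: manometric esophageal
# length divided by patient height, both in centimeters. Heights are
# hypothetical examples; 0.12 is the cutoff suggested by the authors.
def melh_ratio(esophageal_length_cm, height_cm):
    return esophageal_length_cm / height_cm

def shortened_esophagus(ratio, cutoff=0.12):
    """Flag a MELH ratio below the suggested 0.12 cutoff."""
    return ratio < cutoff

# A 20.2-cm esophagus (the Collis group's mean length) in a 170-cm
# patient gives a ratio of about 0.119, just under the 0.12 cutoff.
example = melh_ratio(20.2, 170.0)
```

Dividing by height is the point of the metric: it lets a given esophageal length be judged relative to body size rather than against a fixed normal range.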

On average, patients who needed a Collis gastroplasty had a shorter esophagus (20.2 vs. 22.4 cm; P less than .001) and a lower MELH ratio (0.12 vs. 0.13; P less than .001).

Multivariable hazard regression showed that regardless of surgical approach, for every 0.01-unit increase in MELH ratio, the risk of hernia recurrence decreased by 33% (hazard ratio, 0.67; P less than .001). In contrast, regardless of MELH ratio, repair without Collis gastroplasty was associated with a sixfold increased risk of recurrence (HR, 6.1; P less than .001). Over 5 years, the benefit of Collis gastroplasty translated to significantly lower rates of both hernia recurrence (18% vs. 55%; P less than .001) and reoperation for recurrence (0% vs. 10%; P less than .001).
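Under a proportional-hazards model, the per-0.01-unit hazard ratio compounds multiplicatively across larger MELH differences. The sketch below illustrates that arithmetic; the compounding is a modeling assumption shown for intuition, not a result reported by the authors.

```python
# How the reported per-0.01-unit hazard ratio compounds under a
# proportional-hazards assumption: each 0.01-unit rise in MELH ratio
# multiplies the recurrence hazard by 0.67 (illustration only).
HR_PER_001 = 0.67

def relative_hazard(delta_melh):
    """Relative recurrence hazard for a MELH-ratio difference of delta_melh."""
    return HR_PER_001 ** (delta_melh / 0.01)

# A patient whose ratio is 0.03 units higher (e.g., 0.15 vs. 0.12)
# carries roughly 30% of the comparator's recurrence hazard.
```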

“We suggest that surgeons and gastroenterologists calculate the MELH ratio before repair of a hiatal hernia, and be cognizant of patients with a shortened esophagus,” the investigators concluded. “An esophageal lengthening procedure such as a Collis gastroplasty may reduce the risk of hernia recurrence and reoperation for recurrence, especially for patients with a MELH ratio less than 0.12.” The investigators reported no conflicts of interest.

SOURCE: Lal P et al. J Clin Gastroenterol. 2020 Jan 20. doi: 10.1097/MCG.0000000000001316.



FROM THE JOURNAL OF CLINICAL GASTROENTEROLOGY


IBD: Inpatient opioids linked with outpatient use

Article Type
Changed

Patients with inflammatory bowel disease (IBD) who receive opioids while hospitalized are three times as likely to be prescribed opioids after discharge, based on a retrospective analysis of more than 800 patients.

Awareness of this dose-dependent relationship and IBD-related risks of opioid use should encourage physicians to consider alternative analgesics, according to lead author Rahul S. Dalal, MD, of Brigham and Women’s Hospital, Boston, and colleagues.

“Recent evidence has demonstrated that opioid use is associated with severe infections and increased mortality among IBD patients,” the investigators wrote in Clinical Gastroenterology and Hepatology. “Despite these concerns, opioids are commonly prescribed to IBD patients in the outpatient setting and to as many as 70% of IBD patients who are hospitalized.”

To look for a possible relationship between inpatient and outpatient opioid use, the investigators reviewed electronic medical records of 862 IBD patients who were treated at three urban hospitals in the University of Pennsylvania Health System. The primary outcome was opioid prescription within 12 months of discharge, including prescriptions at time of hospital dismissal.

During hospitalization, about two-thirds (67.6%) of patients received intravenous opioids. Of the total population, slightly more than half (54.6%) received intravenous hydromorphone and about one-quarter (25.9%) received intravenous morphine. Following discharge, almost half of the population (44.7%) was prescribed opioids, and about 3 out of 4 patients (77.9%) received an additional opioid prescription within the same year.

After accounting for confounders such as IBD severity, preadmission opioid use, pain scores, and psychiatric conditions, data analysis showed that inpatients who received intravenous opioids had a threefold (odds ratio [OR], 3.3) increased likelihood of receiving postdischarge opioid prescription, compared with patients who received no opioids while hospitalized. This association was stronger among those who had IBD flares (OR, 5.4). Furthermore, intravenous dose was positively correlated with postdischarge opioid prescription.

Avoiding the intravenous route did not eliminate this relationship. Among inpatients who received only oral or transdermal opioids, a similarly increased likelihood of postdischarge opioid prescription was observed (OR, 4.2), although this cohort was small (n = 67).

Compared with other physicians, gastroenterologists were the least likely to prescribe opioids. Considering that gastroenterologists were also the most likely to be aware of IBD-related risks of opioid use, the investigators concluded that more interdisciplinary communication and education are needed.

“Alternative analgesics such as acetaminophen, dicyclomine, hyoscyamine, and celecoxib could be advised, as many of these therapies have been deemed relatively safe and effective in this population,” they wrote.

The investigators disclosed relationships with Abbott, Gilead, Romark, and others.

SOURCE: Dalal RS et al. Clin Gastroenterol Hepatol. 2019 Dec 27. doi: 10.1016/j.cgh.2019.12.024.




FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY

Vitals

Key clinical point: Patients with inflammatory bowel disease (IBD) who receive opioids while hospitalized are three times as likely to be prescribed opioids after discharge.

Major finding: Patients who were given intravenous opioids while hospitalized were three times as likely to receive a postdischarge opioid prescription, compared with patients who did not receive inpatient intravenous opioids (odds ratio, 3.3).

Study details: A retrospective cohort study involving 862 patients with inflammatory bowel disease.

Disclosures: The investigators disclosed relationships with Abbott, Gilead, Romark, and others.

Source: Dalal RS et al. Clin Gastroenterol Hepatol. 2019 Dec 27. doi: 10.1016/j.cgh.2019.12.024.
