Study Questions Relationship Between Crohn’s Strictures and Cancer Risk

Article Type
Changed
Thu, 09/12/2024 - 10:41

 

Colonic strictures in patients with Crohn’s disease (CD) may not increase long-term risk of colorectal cancer (CRC), offering support for a conservative approach to stricture management, according to investigators.

Although 8% of patients with strictures in a multicenter study were diagnosed with CRC, this diagnosis was made either simultaneously or within 1 year of stricture diagnosis, suggesting that cancer may have driven stricture development, and not the other way around, lead author Thomas Hunaut, MD, of Université de Champagne-Ardenne, Reims, France, and colleagues reported.

“The occurrence of colonic stricture in CD always raises concerns about the risk for dysplasia/cancer,” the investigators wrote in Gastro Hep Advances, noting that no consensus approach is currently available to guide stricture management. “Few studies with conflicting results have evaluated the frequency of CRC associated with colonic stricture in CD, and the natural history of colonic stricture in CD is poorly known.”

The present retrospective study included 88 consecutive CD patients with 96 colorectal strictures who were managed at three French referral centers between 1993 and 2022.

Strictures were symptomatic in 62.5% of cases, not passable by scope in 61.4% of cases, and ulcerated in 70.5% of cases. Colonic resection was needed in 47.7% of patients, while endoscopic balloon dilation was performed in 13.6% of patients.

After a median follow-up of 21.5 months, seven patients (8%) were diagnosed with malignant stricture, including five cases of colonic adenocarcinoma, one case of neuroendocrine carcinoma, and one case of B-cell lymphoproliferative neoplasia.

Malignant strictures were more common among older patients with longer disease duration and frequent obstructive symptoms; however, these associations were not confirmed in multivariate analyses, likely owing to the small sample size, according to the investigators.

Instead, Dr. Hunaut and colleagues highlighted the timing of the diagnoses. In four out of seven patients with malignant stricture, both stricture and cancer were diagnosed at the same time. In the remaining three patients, cancer was diagnosed at 3 months, 8 months, and 12 months after stricture diagnosis. No cases of cancer were diagnosed later than 1 year after the stricture diagnosis.

“We believe that this result is important for the management of colonic strictures complicating CD in clinical practice,” Dr. Hunaut and colleagues wrote.

The simultaneity or proximity of the diagnoses suggests that the “strictures observed are already a neoplastic complication of the colonic inflammatory disease,” they explained.

In other words, the common concern that an established stricture will later give rise to cancer at the same site may be unfounded.

This conclusion echoes a recent administrative database study that reported no independent association between colorectal stricture and CRC, the investigators noted.

“Given the recent evidence on the risk of cancer associated with colonic strictures in CD, systematic colectomy is probably no longer justified,” they wrote. “Factors such as a long disease duration, primary sclerosing cholangitis, a history of dysplasia, and nonpassable and/or symptomatic stricture despite endoscopic dilation tend to argue in favor of surgery — especially if limited resection is possible.”

In contrast, patients with strictures who have low risk of CRC may be better served by a conservative approach, including endoscopy and systematic biopsies, followed by close endoscopic surveillance, according to the investigators. If the stricture is impassable, they recommended endoscopic balloon dilation, followed by intensification of medical therapy if ulceration is observed.

The investigators disclosed relationships with MSD, Ferring, Biogen, and others.

FROM GASTRO HEP ADVANCES


Subcutaneous Infliximab Beats Placebo for IBD Maintenance Therapy

A Milestone in Biosimilar Development
Article Type
Changed
Wed, 09/11/2024 - 13:23

 

Subcutaneous (SC) infliximab is safe and effective, compared with placebo, for maintenance therapy in patients with inflammatory bowel disease (IBD), based on results of the phase 3 LIBERTY trials.

These two randomized trials should increase confidence in SC infliximab as a convenient alternative to intravenous delivery, reported co–lead authors Stephen B. Hanauer, MD, AGAF, of Northwestern Feinberg School of Medicine, Chicago, Illinois, and Bruce E. Sands, MD, AGAF, of Icahn School of Medicine at Mount Sinai, New York City, and colleagues.


Specifically, the trials evaluated CT-P13, an infliximab biosimilar that was approved by the Food and Drug Administration for intravenous (IV) use in 2016. The SC formulation was approved in the United States in 2023 as a new drug, a pathway that required phase 3 confirmatory efficacy trials.

“Physicians and patients may prefer SC to IV treatment for IBD, owing to the convenience and flexibility of at-home self-administration, a different exposure profile with high steady-state levels, reduced exposure to nosocomial infection, and health care system resource benefits,” the investigators wrote in Gastroenterology.

One trial included patients with Crohn’s disease (CD), while the other enrolled patients with ulcerative colitis (UC). Eligibility depended upon inadequate responses or intolerance to corticosteroids and immunomodulators.


All participants began with open-label IV CT-P13, at a dosage of 5 mg/kg, at weeks 0, 2, and 6. At week 10, those who responded to IV induction were randomized 2:1 to either continue with the SC formulation of CT-P13 (120 mg) or switch to placebo, administered every 2 weeks until week 54.

The CD study randomized 343 patients, while the UC study had a larger cohort, with 438 randomized. Median age of participants was in the mid-30s to late 30s, with a majority being White and male. Baseline disease severity, assessed by the Crohn’s Disease Activity Index (CDAI) for CD and the modified Mayo score for UC, was similar across treatment groups.

The primary efficacy endpoint was clinical remission at week 54, defined as a CDAI score of less than 150 for CD and a modified Mayo score of 0-1 for UC.
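As a minimal sketch, the remission cutoffs above can be expressed as simple threshold checks (the cutoff values come from the endpoint definitions in the trials; the function names are my own, for illustration only):

```python
def cd_in_remission(cdai_score: float) -> bool:
    # CDAI score below 150 defines clinical remission in the CD study
    return cdai_score < 150

def uc_in_remission(modified_mayo_score: int) -> bool:
    # Modified Mayo score of 0-1 defines clinical remission in the UC study
    return modified_mayo_score <= 1
```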

In the CD study, 62.3% of patients receiving CT-P13 SC achieved clinical remission, compared with 32.1% in the placebo group, with a treatment difference of 32.1% (95% CI, 20.9-42.1; P < .0001). In addition, 51.1% of CT-P13 SC-treated patients achieved endoscopic response, compared with 17.9% in the placebo group, yielding a treatment difference of 34.6% (95% CI, 24.1-43.5; P < .0001).

In the UC study, 43.2% of patients on CT-P13 SC achieved clinical remission at week 54, compared with 20.8% of those on placebo, with a treatment difference of 21.1% (95% CI, 11.8-29.3; P < .0001). Key secondary endpoints, including endoscopic-histologic mucosal improvement, also favored CT-P13 SC over placebo with statistically significant differences.
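For a rough sense of where such treatment differences and confidence intervals come from, the sketch below computes an unadjusted risk difference with a Wald 95% CI. The arm counts are assumptions based on the roughly 2:1 randomization of the 343 CD patients; the published estimates are stratified, so they differ slightly from this crude calculation:

```python
from math import sqrt

def risk_difference_ci(x1, n1, x2, n2, z=1.96):
    """Unadjusted risk difference between two arms with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Hypothetical counts: ~62% remission in 229 SC patients vs ~32% in 114 placebo
diff, lo, hi = risk_difference_ci(143, 229, 37, 114)
```

With these assumed counts, the crude difference is about 30 percentage points, in the same range as the stratified 32.1% difference reported for the CD study.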

The safety profile of CT-P13 SC was comparable with that of IV infliximab, with no new safety concerns emerging during the trials.

“Our results demonstrate the superior efficacy of CT-P13 SC over placebo for maintenance therapy in patients with moderately to severely active CD or UC after induction with CT-P13 IV,” the investigators wrote. “Importantly, the findings confirm that CT-P13 SC is well tolerated in this population, with no clinically meaningful differences in safety profile, compared with placebo. Overall, the results support CT-P13 SC as a treatment option for maintenance therapy in patients with IBD.”

The LIBERTY studies were funded by Celltrion. The investigators disclosed relationships with Pfizer, Gilead, Takeda, and others.


Intravenous (IV) infliximab-dyyb, also called CT-P13 in clinical trials, is a biosimilar that was approved in the United States in 2016 under the brand name Inflectra. It received approval in Europe and elsewhere under the brand name Remsima.

The study from Hanauer and colleagues represents a milestone in biosimilar development, as the authors studied an injectable form of the approved IV biosimilar, infliximab-dyyb. How might efficacy compare between the two formulations? The LIBERTY studies did not include an active IV infliximab comparator to answer this question. Based on a phase 1, open-label trial, subcutaneous (SC) infliximab appears noninferior to IV infliximab.

The approval of SC infliximab-dyyb is notable for highlighting the distinct process for approving “modified” biosimilars in the United States, compared with elsewhere. For SC infliximab, the Food and Drug Administration required a new drug application and additional trials (the LIBERTY trials). As a result, SC infliximab-dyyb has a different name (Zymfentra) than its IV formulation (Inflectra) in the United States. This contrasts with other areas of the globe, where the SC formulation (Remsima-SC) was approved as a line-extension to the IV biosimilar (Remsima-IV).

It is remarkable that we have progressed from creating highly similar copies of older biologics whose patents have expired, to reimagining and modifying biosimilars to potentially improve on efficacy, dosing, tolerability, or as in the case of SC infliximab-dyyb, providing a new mode of delivery. For SC infliximab, whether the innovator designation will cause different patterns of use based on cost or other factors, compared with places where the injectable and intravenous formulations are both considered biosimilars, remains to be seen.

Fernando S. Velayos, MD, MPH, AGAF, is director of the Inflammatory Bowel Disease Program, The Permanente Group Northern California; adjunct investigator at the Kaiser Permanente Division of Research; and chief of Gastroenterology and Hepatology, Kaiser Permanente San Francisco Medical Center. He reported no conflicts of interest.

FROM GASTROENTEROLOGY


Should All Patients With Early Breast Cancer Receive Adjuvant Radiotherapy?

Article Type
Changed
Fri, 09/06/2024 - 13:03

Adjuvant radiotherapy reduces the risk for short-term recurrence in patients with early breast cancer, but it may have no impact on long-term recurrence or overall survival, based on a 30-year follow-up of the Scottish Breast Conservation Trial.

These findings suggest that patients whose tumor biology predicts late relapse may derive little benefit from adjuvant radiotherapy, lead author Linda J. Williams, PhD, of the University of Edinburgh in Scotland, and colleagues reported.

“During the past 30 years, several randomized controlled trials have investigated the role of postoperative radiotherapy after breast-conserving surgery for early breast cancer,” the investigators wrote in The Lancet Oncology. “These trials showed that radiotherapy reduces the risk of local recurrence but were underpowered individually to detect a difference in overall survival.”
 

How Did the Present Study Increase Our Understanding of the Benefits of Adjuvant Radiotherapy in Early Breast Cancer?

The present analysis included data from a trial that began in 1985, when 589 patients with early breast cancer (tumors ≤ 4 cm [T1 or T2 and N0 or N1]) were randomized to receive either high-dose radiotherapy or no radiotherapy, with final cohorts of 291 and 294 patients, respectively. Radiotherapy was given as 50 Gy in 20-25 fractions, either locally or locoregionally.

Estrogen receptor (ER)–positive patients (≥ 20 fmol/mg protein) received 5 years of daily oral tamoxifen. ER-poor patients (< 20 fmol/mg protein) received a chemotherapy combination of cyclophosphamide, methotrexate, and fluorouracil on a 21-day cycle for eight cycles.

Considering all data across a median follow-up of 17.5 years, adjuvant radiotherapy appeared to offer benefit, as it was associated with significantly lower ipsilateral breast tumor recurrence (16% vs 36%; hazard ratio [HR], 0.39; P < .0001).

But that tells only part of the story.

The positive impact of radiotherapy persisted for 1 decade (HR, 0.24; P < .0001), but risk beyond this point was no different between groups (HR, 0.98; P = .95).

“[The] benefit of radiotherapy was time dependent,” the investigators noted.

What’s more, median overall survival was no different between those who received radiotherapy and those who did not (18.7 vs 19.2 years; HR, 1.08; log-rank P = .43), and “reassuringly,” omitting radiotherapy did not increase the rate of distant metastasis.
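The recurrence proportions reported above can be restated in the absolute terms patients often ask about. The arithmetic below is illustrative only: the trial's published effect size is a hazard ratio (0.39), a time-to-event quantity that is not the same thing as the proportion-based relative risk computed here.

```python
# Illustrative arithmetic: translating the reported ipsilateral recurrence
# proportions (16% with adjuvant radiotherapy vs 36% without) into absolute
# risk reduction, relative risk, and number needed to treat. Note that the
# trial's hazard ratio (0.39) is derived from time-to-event data and is a
# different quantity from the proportion-based relative risk below.

recur_rt = 0.16      # recurrence proportion with adjuvant radiotherapy
recur_no_rt = 0.36   # recurrence proportion without radiotherapy

arr = recur_no_rt - recur_rt   # absolute risk reduction
rr = recur_rt / recur_no_rt    # relative risk (proportion-based)
nnt = 1 / arr                  # number needed to treat

print(f"ARR: {arr:.2f}")   # 0.20 -> 20 percentage points
print(f"RR:  {rr:.2f}")    # 0.44
print(f"NNT: {nnt:.1f}")   # 5.0 patients treated per recurrence avoided
```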
 

How Might These Findings Influence Treatment Planning for Patients With Early Breast Cancer?

“The results can help clinicians to advise patients better about their choice to have radiotherapy or not if they better understand what benefits it does and does not bring,” the investigators wrote. “These results might provide clues perhaps to the biology of radiotherapy benefit, given that it does not prevent late recurrences, suggesting that patients whose biology predicts a late relapse only might not gain a benefit from radiotherapy.”

Gary M. Freedman, MD, chief of Women’s Health Service, Radiation Oncology, at Penn Medicine, Philadelphia, offered a different perspective.

“The study lumps together a local recurrence of breast cancer — that is relapse of the cancer years after treatment with lumpectomy and radiation — with the development of an entirely new breast cancer in the same breast,” Dr. Freedman said in a written comment. “When something comes back between years 0-5 and 0-8, we usually think of it as a true local recurrence arbitrarily, but beyond that they are new cancers.”

He went on to emphasize the clinical importance of reducing local recurrence within the first decade, noting that “this leads to much less morbidity and better quality of life for the patients.”

Dr. Freedman also shared his perspective on the survival data.

“Radiation did reduce breast cancer mortality very significantly — death from breast cancers went down from 46% to 37%,” he wrote (P = .054). “This is on the same level as chemo or hormone therapy. The study was not powered to detect significant differences in survival by radiation, but that has been shown with other meta-analyses.”
 

 

 

Are Findings From a Trial Started 30 Years Ago Still Relevant Today?

“Clearly the treatment of early breast cancer has advanced since the 1980s when the Scottish Conservation trial was launched,” study coauthor Ian Kunkler, MB, FRCR, of the University of Edinburgh, said in a written comment. “There is more breast screening, attention to clearing surgical margins of residual disease, more effective and longer periods of adjuvant hormonal therapy, reduced radiotherapy toxicity from more precise delivery. However, most anticancer treatments lose their effectiveness over time.”

He suggested that more trials are needed to confirm the present findings and reiterated that the lack of long-term recurrence benefit is most relevant for patients with disease features that predict late relapse, who “seem to gain little from adjuvant radiotherapy given as part of primary treatment.”

Dr. Kunkler noted that the observed benefit in the first decade supports the continued use of radiotherapy alongside anticancer drug treatment.

When asked the same question, Dr. Freedman emphasized the differences in treatment today vs the 1980s.

“The results of modern multidisciplinary cancer care are much, much better than these 30-year results,” Dr. Freedman said. “The risk for local recurrence in the breast after radiation is now about 2%-3% at 10 years in most studies.”

He also noted that modern radiotherapy techniques have “significantly lowered dose and risks to heart and lung,” compared with techniques used 30 years ago.

“A take-home point for the study is after breast conservation, whether or not you have radiation, you have to continue long-term screening mammograms for new breast cancers that may occur even decades later,” Dr. Freedman concluded.
 

How Might These Findings Impact Future Research Design and Funding?

“The findings should encourage trial funders to consider funding long-term follow-up beyond 10 years to assess benefits and risks of anticancer therapies,” Dr. Kunkler said. “The importance of long-term follow-up cannot be understated.”

This study was funded by Breast Cancer Institute (part of Edinburgh and Lothians Health Foundation), PFS Genomics (now part of Exact Sciences), the University of Edinburgh, and NHS Lothian. The investigators reported no conflicts of interest.

A version of this article first appeared on Medscape.com.

FROM THE LANCET ONCOLOGY


Do Clonal Hematopoiesis and Mosaic Chromosomal Alterations Increase Solid Tumor Risk?

Article Type
Changed
Wed, 09/25/2024 - 06:41

Clonal hematopoiesis of indeterminate potential (CHIP) and mosaic chromosomal alterations (mCAs) are associated with an increased risk for breast cancer, and CHIP is associated with increased mortality in patients with colon cancer, according to the authors of new research.

These findings, drawn from almost 11,000 patients in the Women’s Health Initiative (WHI) study, add further evidence that CHIP and mCA drive solid tumor risk, alongside known associations with hematologic malignancies, reported lead author Pinkal Desai, MD, associate professor of medicine and clinical director of molecular aging at Englander Institute for Precision Medicine, Weill Cornell Medical College, New York City, and colleagues.
 

How This Study Differs From Others of Breast Cancer Risk Factors

“The independent effect of CHIP and mCA on risk and mortality from solid tumors has not been elucidated due to lack of detailed data on mortality outcomes and risk factors,” the investigators wrote in Cancer, although some previous studies have suggested a link.

In particular, the investigators highlighted a 2022 UK Biobank study, which reported an association between CHIP and lung cancer and a borderline association with breast cancer that did not quite reach statistical significance.

But the UK Biobank study was confined to a UK population, Dr. Desai noted in an interview, and the data were less detailed than those in the present investigation.

“In terms of risk, the part that was lacking in previous studies was a comprehensive assessment of risk factors that increase risk for all these cancers,” Dr. Desai said. “For example, for breast cancer, we had very detailed data on [participants’] Gail risk score, which is known to impact breast cancer risk. We also had mammogram data and colonoscopy data.”

In an accompanying editorial, Koichi Takahashi, MD, PhD, and Nehali Shah, BS, of The University of Texas MD Anderson Cancer Center, Houston, Texas, pointed out the same UK Biobank findings, then noted that CHIP has also been linked with worse overall survival in unselected cancer patients. Still, they wrote, “the impact of CH on cancer risk and mortality remains controversial due to conflicting data and context‐dependent effects,” necessitating studies like this one by Dr. Desai and colleagues.
 

How Was the Relationship Between CHIP, MCA, and Solid Tumor Risk Assessed?

To explore possible associations between CHIP, mCA, and solid tumors, the investigators analyzed whole genome sequencing data from 10,866 women in the WHI, a multi-study program that began in 1992 and involved 161,808 women in both observational and clinical trial cohorts.

In 2002, the first big data release from the WHI suggested that hormone replacement therapy (HRT) increased breast cancer risk, leading to widespread reduction in HRT use.

More recent reports continue to shape our understanding of these risks, suggesting differences across cancer types. For breast cancer, the WHI data suggested that HRT-associated risk was largely driven by formulations involving progesterone and estrogen, whereas estrogen-only formulations, now more common, are generally considered to present an acceptable risk profile for suitable patients.

The new study accounted for this potential HRT-associated risk, including by adjusting for whether patients received HRT, the type of HRT received, and the duration of use. According to Dr. Desai, this approach is standard when analyzing WHI data and addresses concerns about the potentially deleterious effects of the hormones used in the study.

“Our question was not ‘does HRT cause cancer?’ ” Dr. Desai said in an interview. “But HRT can be linked to breast cancer risk and has a potential to be a confounder, and hence the above methodology.

“So I can say that the confounding/effect modification that HRT would have contributed to in the relationship between exposure (CH and mCA) and outcome (cancer) is well adjusted for as described above. This is standard in WHI analyses,” she continued.

“Every Women’s Health Initiative analysis that comes out — not just for our study — uses a standard method ... where you account for hormonal therapy,” Dr. Desai added, again noting that many other potential risk factors were considered, enabling a “detailed, robust” analysis.

Dr. Takahashi and Ms. Shah agreed. “A notable strength of this study is its adjustment for many confounding factors,” they wrote. “The cohort’s well‐annotated data on other known cancer risk factors allowed for a robust assessment of CH’s independent risk.”
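The idea behind adjusting for a confounder such as HRT can be sketched with the simplest classical tool, a Mantel-Haenszel pooled odds ratio across confounder strata. The counts below are invented for illustration and do not come from the WHI analysis, which used multivariable regression; the sketch only shows how estimating the exposure-outcome association within strata and pooling removes the confounder's contribution.

```python
# Hypothetical illustration of confounder adjustment by stratification
# (Mantel-Haenszel pooled odds ratio). All counts are invented; the WHI
# analysis itself used multivariable models, not this exact method.

def mantel_haenszel_or(strata):
    """strata: list of 2x2 tables (a, b, c, d) =
    (exposed cases, exposed non-cases, unexposed cases, unexposed non-cases)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Invented strata of a confounder (here, HRT use), with exposure = CHIP
# and outcome = cancer diagnosis.
hrt_users = (30, 170, 40, 360)
non_users = (20, 280, 25, 675)

pooled = mantel_haenszel_or([hrt_users, non_users])
print(f"MH-adjusted OR: {pooled:.2f}")
```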
 

 

 

How Do Findings Compare With Those of the UK Biobank Study?

CHIP was associated with a 30% increased risk for breast cancer (hazard ratio [HR], 1.30; 95% CI, 1.03-1.64; P = .02), strengthening the borderline association reported by the UK Biobank study.

In contrast with the UK Biobank study, CHIP was not associated with lung cancer risk, although this may have been caused by fewer cases of lung cancer and a lack of male patients, Dr. Desai suggested.

“The discrepancy between the studies lies in the risk of lung cancer, although the point estimate in the current study suggested a positive association,” wrote Dr. Takahashi and Ms. Shah.

As in the UK Biobank study, CHIP was not associated with increased risk of developing colorectal cancer.

Mortality analysis, however, which was not conducted in the UK Biobank study, offered a new insight: Patients with existing colorectal cancer and CHIP had a significantly higher mortality risk than those without CHIP. Before stage adjustment, risk for mortality among those with colorectal cancer and CHIP was fourfold higher than those without CHIP (HR, 3.99; 95% CI, 2.41-6.62; P < .001). After stage adjustment, CHIP was still associated with a twofold higher mortality risk (HR, 2.50; 95% CI, 1.32-4.72; P = .004).

The investigators’ initial mCA analyses, which employed a cell fraction cutoff greater than 3%, found no significant associations. But raising the cell fraction threshold to 5% in an exploratory analysis showed that autosomal mCA was associated with a 39% increased risk for breast cancer (HR, 1.39; 95% CI, 1.06-1.83; P = .01). No such associations were found between mCA and colorectal or lung cancer, regardless of cell fraction threshold.

The original 3% cell fraction threshold was selected on the basis of previous studies reporting a link between mCA and hematologic malignancies at this cutoff, Dr. Desai said.

She and her colleagues said a higher 5% cutoff might be needed, as they suspected that the link between mCA and solid tumors may not be causal, requiring a higher mutation rate.
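The thresholding step described above amounts to filtering mCA calls by their estimated cell fraction. The sketch below uses entirely invented calls, purely to make the 3% vs 5% cutoffs concrete.

```python
# Hypothetical sketch of cell-fraction thresholding for mCA calls. Each call
# carries an estimated fraction of blood cells bearing the alteration; the
# analysis kept calls above a cutoff (3% primary, 5% exploratory). The call
# records below are invented for illustration.

mca_calls = [
    {"id": "chr1_gain", "cell_fraction": 0.021},
    {"id": "chr12_cnloh", "cell_fraction": 0.048},
    {"id": "chrX_loss", "cell_fraction": 0.093},
]

def filter_calls(calls, min_cell_fraction):
    """Keep only calls whose cell fraction exceeds the cutoff."""
    return [c for c in calls if c["cell_fraction"] > min_cell_fraction]

print(len(filter_calls(mca_calls, 0.03)))  # 2 calls pass the 3% cutoff
print(len(filter_calls(mca_calls, 0.05)))  # 1 call passes the 5% cutoff
```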
 

Why Do Results Differ Between These Types of Studies?

Dr. Takahashi and Ms. Shah suggested that one possible limitation of the new study, and an obstacle to comparing results with the UK Biobank study and others like it, goes beyond population heterogeneity; incongruent findings could also be explained by differences in whole genome sequencing (WGS) technique.

“Although WGS allows sensitive detection of mCA through broad genomic coverage, it is less effective at detecting CHIP with low variant allele frequency (VAF) due to its relatively shallow depth (30x),” they wrote. “Consequently, the prevalence of mCA (18.8%) was much higher than that of CHIP (8.3%) in this cohort, contrasting with other studies using deeper sequencing.” As a result, the present study may have underestimated CHIP prevalence because of shallow sequencing depth.

“This inconsistency is a common challenge in CH population studies due to the lack of standardized methodologies and the frequent reliance on preexisting data not originally intended for CH detection,” Dr. Takahashi and Ms. Shah said.

Even so, despite the “heavily context-dependent” nature of these reported risks, the body of evidence to date now offers a convincing biological rationale linking CH with cancer development and outcomes, they added.
 

 

 

How Do the CHIP- and mCA-associated Risks Differ Between Solid Tumors and Blood Cancers?

“[These solid tumor risks are] not causal in the way CHIP mutations are causal for blood cancers,” Dr. Desai said. “Here we are talking about solid tumor risk, and it’s kind of scattered. It’s not just breast cancer ... there’s also increased colon cancer mortality. So I feel these mutations are doing something different ... they are sort of an added factor.”

Specific mechanisms remain unclear, Dr. Desai said, although she speculated about possible impacts on the inflammatory state or alterations to the tumor microenvironment.

“These are blood cells, right?” Dr. Desai asked. “They’re everywhere, and they’re changing something inherently in these tumors.”
 

Future Research and Therapeutic Development

Siddhartha Jaiswal, MD, PhD, assistant professor in the Department of Pathology at Stanford University in California, whose lab focuses on clonal hematopoiesis, said the causality question is central to future research.

“The key question is, are these mutations acting because they alter the function of blood cells in some way to promote cancer risk, or is it reflective of some sort of shared etiology that’s not causal?” Dr. Jaiswal said in an interview.

Available data support both possibilities.

On one side, “reasonable evidence” supports the noncausal view, Dr. Jaiswal noted, because telomere length is one of the most common genetic risk factors for clonal hematopoiesis and also for solid tumors, suggesting a shared genetic factor. On the other hand, CHIP and mCA could be directly protumorigenic via conferred disturbances of immune cell function.

When asked if both causal and noncausal factors could be at play, Dr. Jaiswal said, “yeah, absolutely.”

The presence of a causal association could be promising from a therapeutic standpoint.

“If it turns out that this association is driven by a direct causal effect of the mutations, perhaps related to immune cell function or dysfunction, then targeting that dysfunction could be a therapeutic path to improve outcomes in people, and there’s a lot of interest in this,” Dr. Jaiswal said. He went on to explain how a trial exploring this approach via interleukin-8 inhibition in lung cancer fell short.

Yet earlier intervention may still hold promise, according to experts.

“[This study] provokes the hypothesis that CH‐targeted interventions could potentially reduce cancer risk in the future,” Dr. Takahashi and Ms. Shah said in their editorial.

The WHI program is funded by the National Heart, Lung, and Blood Institute; National Institutes of Health; and the Department of Health & Human Services. The investigators disclosed relationships with Eli Lilly, AbbVie, Celgene, and others. Dr. Jaiswal reported stock equity in a company that has an interest in clonal hematopoiesis.

A version of this article first appeared on Medscape.com.


Dr. Takahashi and Ms. Shah agreed. “A notable strength of this study is its adjustment for many confounding factors,” they wrote. “The cohort’s well‐annotated data on other known cancer risk factors allowed for a robust assessment of CH’s independent risk.”
 

 

 

How Do Findings Compare With Those of the UK Biobank Study?

CHIP was associated with a 30% increased risk for breast cancer (hazard ratio [HR], 1.30; 95% CI, 1.03-1.64; P = .02), strengthening the borderline association reported by the UK Biobank study.

In contrast with the UK Biobank study, CHIP was not associated with lung cancer risk, although this may have been caused by fewer cases of lung cancer and a lack of male patients, Dr. Desai suggested.

“The discrepancy between the studies lies in the risk of lung cancer, although the point estimate in the current study suggested a positive association,” wrote Dr. Takahashi and Ms. Shah.

As in the UK Biobank study, CHIP was not associated with increased risk of developing colorectal cancer.

Mortality analysis, however, which was not conducted in the UK Biobank study, offered a new insight: Patients with existing colorectal cancer and CHIP had a significantly higher mortality risk than those without CHIP. Before stage adjustment, risk for mortality among those with colorectal cancer and CHIP was fourfold higher than those without CHIP (HR, 3.99; 95% CI, 2.41-6.62; P < .001). After stage adjustment, CHIP was still associated with a twofold higher mortality risk (HR, 2.50; 95% CI, 1.32-4.72; P = .004).

The investigators’ first mCA analyses, which employed a cell fraction cutoff greater than 3%, were unfruitful. But raising the cell fraction threshold to 5% in an exploratory analysis showed that autosomal mCA was associated with a 39% increased risk for breast cancer (HR, 1.39; 95% CI, 1.06-1.83; P = .01). No such associations were found between mCA and colorectal or lung cancer, regardless of cell fraction threshold.

The original 3% cell fraction threshold was selected on the basis of previous studies reporting a link between mCA and hematologic malignancies at this cutoff, Dr. Desai said.

She and her colleagues said a higher 5% cutoff might be needed, as they suspected that the link between mCA and solid tumors may not be causal, requiring a higher mutation rate.
 

Why Do Results Differ Between These Types of Studies?

Dr. Takahashi and Ms. Shah suggested that one possible limitation of the new study, and an obstacle to comparing results with the UK Biobank study and others like it, goes beyond population heterogeneity; incongruent findings could also be explained by differences in whole genome sequencing (WGS) technique.

“Although WGS allows sensitive detection of mCA through broad genomic coverage, it is less effective at detecting CHIP with low variant allele frequency (VAF) due to its relatively shallow depth (30x),” they wrote. “Consequently, the prevalence of mCA (18.8%) was much higher than that of CHIP (8.3%) in this cohort, contrasting with other studies using deeper sequencing.” As a result, the present study may have underestimated CHIP prevalence because of shallow sequencing depth.

“This inconsistency is a common challenge in CH population studies due to the lack of standardized methodologies and the frequent reliance on preexisting data not originally intended for CH detection,” Dr. Takahashi and Ms. Shah said.

Even so, despite the “heavily context-dependent” nature of these reported risks, the body of evidence to date now offers a convincing biological rationale linking CH with cancer development and outcomes, they added.
 

 

 

How Do the CHIP- and mCA-associated Risks Differ Between Solid Tumors and Blood Cancers?

“[These solid tumor risks are] not causal in the way CHIP mutations are causal for blood cancers,” Dr. Desai said. “Here we are talking about solid tumor risk, and it’s kind of scattered. It’s not just breast cancer ... there’s also increased colon cancer mortality. So I feel these mutations are doing something different ... they are sort of an added factor.”

Specific mechanisms remain unclear, Dr. Desai said, although she speculated about possible impacts on the inflammatory state or alterations to the tumor microenvironment.

“These are blood cells, right?” Dr. Desai asked. “They’re everywhere, and they’re changing something inherently in these tumors.”
 

Future research and therapeutic development

Siddhartha Jaiswal, MD, PhD, assistant professor in the Department of Pathology at Stanford University in California, whose lab focuses on clonal hematopoiesis, said the causality question is central to future research.

“The key question is, are these mutations acting because they alter the function of blood cells in some way to promote cancer risk, or is it reflective of some sort of shared etiology that’s not causal?” Dr. Jaiswal said in an interview.

Available data support both possibilities.

On one side, “reasonable evidence” supports the noncausal view, Dr. Jaiswal noted, because telomere length is one of the most common genetic risk factors for clonal hematopoiesis and also for solid tumors, suggesting a shared genetic factor. On the other hand, CHIP and mCA could be directly protumorigenic via conferred disturbances of immune cell function.

When asked if both causal and noncausal factors could be at play, Dr. Jaiswal said, “yeah, absolutely.”

The presence of a causal association could be promising from a therapeutic standpoint.

“If it turns out that this association is driven by a direct causal effect of the mutations, perhaps related to immune cell function or dysfunction, then targeting that dysfunction could be a therapeutic path to improve outcomes in people, and there’s a lot of interest in this,” Dr. Jaiswal said. He went on to explain how a trial exploring this approach via interleukin-8 inhibition in lung cancer fell short.

Yet earlier intervention may still hold promise, according to experts.

“[This study] provokes the hypothesis that CH‐targeted interventions could potentially reduce cancer risk in the future,” Dr. Takahashi and Ms. Shah said in their editorial.

The WHI program is funded by the National Heart, Lung, and Blood Institute; National Institutes of Health; and the Department of Health & Human Services. The investigators disclosed relationships with Eli Lilly, AbbVie, Celgene, and others. Dr. Jaiswal reported stock equity in a company that has an interest in clonal hematopoiesis.

A version of this article first appeared on Medscape.com.

Clonal hematopoiesis of indeterminate potential (CHIP) and mosaic chromosomal alterations (mCAs) are associated with an increased risk for breast cancer, and CHIP is associated with increased mortality in patients with colon cancer, according to the authors of new research.

These findings, drawn from almost 11,000 patients in the Women’s Health Initiative (WHI) study, add further evidence that CHIP and mCA drive solid tumor risk, alongside known associations with hematologic malignancies, reported lead author Pinkal Desai, MD, associate professor of medicine and clinical director of molecular aging at Englander Institute for Precision Medicine, Weill Cornell Medical College, New York City, and colleagues.
 

How This Study Differs From Previous Studies of Breast Cancer Risk Factors

“The independent effect of CHIP and mCA on risk and mortality from solid tumors has not been elucidated due to lack of detailed data on mortality outcomes and risk factors,” the investigators wrote in Cancer, although some previous studies have suggested a link.

In particular, the investigators highlighted a 2022 UK Biobank study, which reported an association between CHIP and lung cancer, along with a borderline, statistically nonsignificant association with breast cancer.

But the UK Biobank study was confined to a UK population, Dr. Desai noted in an interview, and the data were less detailed than those in the present investigation.

“In terms of risk, the part that was lacking in previous studies was a comprehensive assessment of risk factors that increase risk for all these cancers,” Dr. Desai said. “For example, for breast cancer, we had very detailed data on [participants’] Gail risk score, which is known to impact breast cancer risk. We also had mammogram data and colonoscopy data.”

In an accompanying editorial, Koichi Takahashi, MD, PhD, and Nehali Shah, BS, of The University of Texas MD Anderson Cancer Center, Houston, Texas, pointed out the same UK Biobank findings, then noted that CHIP has also been linked with worse overall survival in unselected cancer patients. Still, they wrote, “the impact of CH on cancer risk and mortality remains controversial due to conflicting data and context‐dependent effects,” necessitating studies like this one by Dr. Desai and colleagues.
 

How Was the Relationship Between CHIP, mCA, and Solid Tumor Risk Assessed?

To explore possible associations between CHIP, mCA, and solid tumors, the investigators analyzed whole genome sequencing data from 10,866 women in the WHI, a multi-study program that began in 1992 and involved 161,808 women in both observational and clinical trial cohorts.

In 2002, the first big data release from the WHI suggested that hormone replacement therapy (HRT) increased breast cancer risk, leading to widespread reduction in HRT use.

More recent reports continue to shape our understanding of these risks, suggesting differences across cancer types. For breast cancer, the WHI data suggested that HRT-associated risk was largely driven by formulations involving progesterone and estrogen, whereas estrogen-only formulations, now more common, are generally considered to present an acceptable risk profile for suitable patients.

The new study accounted for this potential HRT-associated risk by adjusting for whether patients received HRT, the type of HRT received, and the duration of HRT received. According to Dr. Desai, this approach is standard when analyzing WHI data and addresses concerns about the potentially deleterious effects of the hormones used in the study.

“Our question was not ‘does HRT cause cancer?’” Dr. Desai said in an interview. “But HRT can be linked to breast cancer risk and has a potential to be a confounder, and hence the above methodology.

“So I can say that the confounding/effect modification that HRT would have contributed to in the relationship between exposure (CH and mCA) and outcome (cancer) is well adjusted for as described above. This is standard in WHI analyses,” she continued.

“Every Women’s Health Initiative analysis that comes out — not just for our study — uses a standard method ... where you account for hormonal therapy,” Dr. Desai added, again noting that many other potential risk factors were considered, enabling a “detailed, robust” analysis.

Dr. Takahashi and Ms. Shah agreed. “A notable strength of this study is its adjustment for many confounding factors,” they wrote. “The cohort’s well‐annotated data on other known cancer risk factors allowed for a robust assessment of CH’s independent risk.”
 

 

 

How Do Findings Compare With Those of the UK Biobank Study?

CHIP was associated with a 30% increased risk for breast cancer (hazard ratio [HR], 1.30; 95% CI, 1.03-1.64; P = .02), strengthening the borderline association reported by the UK Biobank study.

In contrast with the UK Biobank study, CHIP was not associated with lung cancer risk in the present analysis, although this may reflect the smaller number of lung cancer cases and the absence of male participants in the WHI, Dr. Desai suggested.

“The discrepancy between the studies lies in the risk of lung cancer, although the point estimate in the current study suggested a positive association,” wrote Dr. Takahashi and Ms. Shah.

As in the UK Biobank study, CHIP was not associated with increased risk of developing colorectal cancer.

Mortality analysis, however, which was not conducted in the UK Biobank study, offered a new insight: Patients with existing colorectal cancer and CHIP had a significantly higher mortality risk than those without CHIP. Before stage adjustment, risk for mortality among those with colorectal cancer and CHIP was fourfold higher than those without CHIP (HR, 3.99; 95% CI, 2.41-6.62; P < .001). After stage adjustment, CHIP was still associated with a twofold higher mortality risk (HR, 2.50; 95% CI, 1.32-4.72; P = .004).
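The before-and-after stage adjustment reported here can be illustrated with a toy stratified analysis. The sketch below uses a Mantel-Haenszel risk ratio over two hypothetical stage strata (the study itself used survival models, and every count here is invented, not taken from the WHI data): because CHIP carriers are concentrated in the late-stage stratum, the crude risk ratio overstates the within-stage effect.

```python
# Toy illustration of confounding by stage: if CHIP carriers are
# overrepresented among late-stage cancers, the crude mortality risk
# ratio exceeds the within-stage (stage-adjusted) risk ratio.
# All counts are hypothetical -- they are NOT the study's data.

# Per stratum: (deaths_chip, n_chip, deaths_nochip, n_nochip)
strata = {
    "early": (2, 50, 6, 300),
    "late": (60, 150, 20, 100),
}

def crude_rr(strata):
    """Risk ratio ignoring stage entirely."""
    d1 = sum(s[0] for s in strata.values())
    n1 = sum(s[1] for s in strata.values())
    d0 = sum(s[2] for s in strata.values())
    n0 = sum(s[3] for s in strata.values())
    return (d1 / n1) / (d0 / n0)

def mantel_haenszel_rr(strata):
    """Stage-adjusted risk ratio (Mantel-Haenszel weights)."""
    num = den = 0.0
    for d1, n1, d0, n0 in strata.values():
        total = n1 + n0
        num += d1 * n0 / total  # exposed deaths, MH-weighted
        den += d0 * n1 / total  # unexposed deaths, MH-weighted
    return num / den

print(f"crude RR: {crude_rr(strata):.2f}")
print(f"stage-adjusted (MH) RR: {mantel_haenszel_rr(strata):.2f}")
```

In this contrived example the within-stage risk ratio is 2.0 in both strata, yet the crude ratio is nearly 4.8 purely because of the stage imbalance, mirroring the direction of the attenuation from HR 3.99 to HR 2.50 described above.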

The investigators’ initial mCA analyses, which used a cell fraction cutoff greater than 3%, yielded no significant associations. Raising the threshold to 5% in an exploratory analysis, however, showed that autosomal mCA was associated with a 39% increased risk for breast cancer (HR, 1.39; 95% CI, 1.06-1.83; P = .01). No such associations were found between mCA and colorectal or lung cancer at either threshold.

The original 3% cell fraction threshold was selected on the basis of previous studies reporting a link between mCA and hematologic malignancies at this cutoff, Dr. Desai said.

She and her colleagues suggested that the higher 5% cutoff might be needed because the link between mCA and solid tumors may not be causal and may become apparent only at a greater mutational burden.
 

Why Do Results Differ Between These Types of Studies?

Dr. Takahashi and Ms. Shah suggested that one possible limitation of the new study, and an obstacle to comparing results with the UK Biobank study and others like it, goes beyond population heterogeneity; incongruent findings could also be explained by differences in whole genome sequencing (WGS) technique.

“Although WGS allows sensitive detection of mCA through broad genomic coverage, it is less effective at detecting CHIP with low variant allele frequency (VAF) due to its relatively shallow depth (30x),” they wrote. “Consequently, the prevalence of mCA (18.8%) was much higher than that of CHIP (8.3%) in this cohort, contrasting with other studies using deeper sequencing.” As a result, the present study may have underestimated CHIP prevalence because of shallow sequencing depth.
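The editorial’s point about sequencing depth can be made concrete with a simple binomial sketch (our illustration, not the editorial’s): if a variant caller requires, say, at least three variant-supporting reads, the chance of observing that many at 30x coverage drops steeply as the variant allele frequency falls.

```python
import math

def detect_prob(depth, vaf, min_reads=3):
    """P(at least min_reads variant-supporting reads) for a variant at
    the given VAF, treating each read as an independent draw (binomial
    model). min_reads=3 is an illustrative caller threshold, not a
    parameter reported by the study."""
    return 1.0 - sum(
        math.comb(depth, k) * vaf**k * (1 - vaf) ** (depth - k)
        for k in range(min_reads)
    )

# Detection probability at 30x coverage across a range of VAFs
for vaf in (0.02, 0.05, 0.10, 0.25):
    print(f"VAF {vaf:.0%}: detection probability {detect_prob(30, vaf):.2f}")
```

Under this model a 25% VAF clone is detected almost every time at 30x, while a 2% VAF clone is detected only rarely, which is consistent with the editorialists’ argument that shallow WGS undercounts low-VAF CHIP.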

“This inconsistency is a common challenge in CH population studies due to the lack of standardized methodologies and the frequent reliance on preexisting data not originally intended for CH detection,” Dr. Takahashi and Ms. Shah said.

Even so, despite the “heavily context-dependent” nature of these reported risks, the body of evidence to date now offers a convincing biological rationale linking CH with cancer development and outcomes, they added.
 

 

 

How Do the CHIP- and mCA-associated Risks Differ Between Solid Tumors and Blood Cancers?

“[These solid tumor risks are] not causal in the way CHIP mutations are causal for blood cancers,” Dr. Desai said. “Here we are talking about solid tumor risk, and it’s kind of scattered. It’s not just breast cancer ... there’s also increased colon cancer mortality. So I feel these mutations are doing something different ... they are sort of an added factor.”

Specific mechanisms remain unclear, Dr. Desai said, although she speculated about possible impacts on the inflammatory state or alterations to the tumor microenvironment.

“These are blood cells, right?” Dr. Desai asked. “They’re everywhere, and they’re changing something inherently in these tumors.”
 

Future Research and Therapeutic Development

Siddhartha Jaiswal, MD, PhD, assistant professor in the Department of Pathology at Stanford University in California, whose lab focuses on clonal hematopoiesis, said the causality question is central to future research.

“The key question is, are these mutations acting because they alter the function of blood cells in some way to promote cancer risk, or is it reflective of some sort of shared etiology that’s not causal?” Dr. Jaiswal said in an interview.

Available data support both possibilities.

On one side, “reasonable evidence” supports the noncausal view, Dr. Jaiswal noted, because telomere length is one of the most common genetic risk factors for clonal hematopoiesis and also for solid tumors, suggesting a shared genetic factor. On the other hand, CHIP and mCA could be directly protumorigenic via conferred disturbances of immune cell function.

When asked if both causal and noncausal factors could be at play, Dr. Jaiswal said, “yeah, absolutely.”

The presence of a causal association could be promising from a therapeutic standpoint.

“If it turns out that this association is driven by a direct causal effect of the mutations, perhaps related to immune cell function or dysfunction, then targeting that dysfunction could be a therapeutic path to improve outcomes in people, and there’s a lot of interest in this,” Dr. Jaiswal said. He went on to explain how a trial exploring this approach via interleukin-8 inhibition in lung cancer fell short.

Yet earlier intervention may still hold promise, according to experts.

“[This study] provokes the hypothesis that CH‐targeted interventions could potentially reduce cancer risk in the future,” Dr. Takahashi and Ms. Shah said in their editorial.

The WHI program is funded by the National Heart, Lung, and Blood Institute; National Institutes of Health; and the Department of Health & Human Services. The investigators disclosed relationships with Eli Lilly, AbbVie, Celgene, and others. Dr. Jaiswal reported stock equity in a company that has an interest in clonal hematopoiesis.

A version of this article first appeared on Medscape.com.


Could Baseline MRIs Reshape Prostate Cancer Risk Assessment?


Adding baseline MRI to conventional prostate cancer risk stratification could improve prognostic accuracy, potentially affecting active surveillance and treatment decisions for some patients, according to the investigators on a new trial.

The multicenter, real-world trial showed that men with low-risk or favorable intermediate-risk disease who had higher Prostate Imaging Reporting and Data System (PI-RADS) scores at baseline were more likely to be reclassified with more aggressive disease on a future biopsy, wrote lead author Kiran R. Nandalur, MD, and colleagues. The study was published in The Journal of Urology.

The findings suggest that, without MRI, some prostate cancers are being labeled as lower-risk than they actually are.

The investigators noted that MRI is increasingly being used to choose patients who are appropriate for active surveillance instead of treatment, but related clinical data are scarce.

Although PI-RADS is the preferred metric for characterizing prostate tumors via MRI, “most previous studies on the prognostic implications of baseline PI-RADS score included smaller populations from academic centers, limited inclusion of clinical and pathologic data into models, and/or [are] ambiguous on the implications of PI-RADS score,” they wrote.

These knowledge gaps prompted the present study.
 

How Were Baseline MRI Findings Related to Prostate Cancer Disease Risk?

The dataset included 1491 men with prostate cancer that was diagnosed at 46 hospital-based, academic, or private practice urology groups. All had low-risk or favorable intermediate-risk disease and had undergone MRI within 6 months before or after initial biopsy, along with enrollment in active surveillance.

“A novel aspect of this study was that the MRIs were not read by dedicated prostate MRI experts at academic institutions, but rather a mix of community and academic radiologists,” Dr. Nandalur, medical director of Corewell Health East Radiology, Royal Oak, Michigan, said in an interview.

After traditional risk factors were accounted for, a baseline PI-RADS score of 4 or greater was significantly associated with an increased likelihood of reclassification to high-grade prostate cancer on surveillance biopsy (hazard ratio, 2.3; 95% CI, 1.6-3.2; P < .001).
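Figures like these can be sanity-checked with a standard back-calculation that the authors do not describe but that follows from how confidence intervals are constructed: assuming normality on the log scale, the standard error is the width of the log CI divided by 2 × 1.96, from which an approximate P value follows.

```python
import math

def p_from_hr_ci(hr, lo, hi):
    """Approximate two-sided P value recovered from a hazard ratio and
    its 95% CI, assuming the estimate is normal on the log scale (the
    usual Wald-interval back-calculation)."""
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    z = math.log(hr) / se
    # two-sided normal tail probability via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Reported association: HR 2.3 (95% CI, 1.6-3.2), P < .001
print(f"approximate P = {p_from_hr_ci(2.3, 1.6, 3.2):.6f}")
```

The back-calculated value lands well below .001, consistent with the reported P value; the approximation only holds when the interval is symmetric on the log scale, which it is here.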

“These patients with suspicious lesions on their initial MRI were more than twice as likely to have higher-grade disease within 5 years,” Dr. Nandalur noted. “This result was not only seen in the low-risk group but also in the favorable intermediate-risk group, which hasn’t been shown before.”

Grade group 2 vs 1 and increasing age were also associated with significantly increased risk for reclassification to a more aggressive cancer type.
 

How Might These Findings Improve Outcomes in Patients With Prostate Cancer?

Currently, 60%-70% of patients with low-risk disease choose active surveillance over immediate treatment, whereas 20% with favorable intermediate-risk disease choose active surveillance, according to Dr. Nandalur.

For low-risk patients, PI-RADS score is unlikely to change this decision, although surveillance intervals could be adjusted in accordance with risk. More notably, those with favorable intermediate-risk disease may benefit from considering PI-RADS score when choosing between active surveillance and immediate treatment.

“Most of the management strategies for prostate cancer are based on just your lab values and your pathology,” Dr. Nandalur said, “but this study shows that maybe we should start taking MRI into account — into the general paradigm of management of prostate cancer.”

Ideally, he added, prospective studies will confirm these findings, although such studies can be challenging to perform and similar data have historically been sufficient to reshape clinical practice.

“We are hoping that [baseline PI-RADS score] will be adopted into the NCCN [National Comprehensive Cancer Network] guidelines,” Dr. Nandalur said.
 

 

 

How Likely Are These Findings to Reshape Clinical Practice?

“The study’s large, multicenter cohort and its focus on the prognostic value of baseline MRI in active surveillance make it a crucial contribution to the field, providing evidence that can potentially refine patient management strategies in clinical practice,” Ismail Baris Turkbey, MD, FSAR, head of MRI Section, Molecular Imaging Branch, National Cancer Institute, Rockville, Maryland, said in a written comment.

“The findings from this study are likely to have a significant impact on clinical practice and potentially influence future guidelines in the management of localized prostate cancer, particularly in the context of active surveillance,” Dr. Turkbey said. “MRI, already a commonly used imaging modality in prostate cancer management, may become an even more integral part of the initial assessment and ongoing monitoring of patients with low or favorable-intermediate risk prostate cancer.”

Dr. Turkbey noted several strengths of the study.  

First, the size and the diversity of the cohort, along with the variety of treatment centers, support generalizability of findings. Second, the study pinpoints a “critical aspect” of active surveillance by uncovering the link between baseline MRI findings and later risk reclassification. Finally, the study also showed that increasing age was associated with higher likelihood of risk reclassification, “further emphasizing the need for personalized risk assessment” among these patients.
 

What Were Some Limitations of This Study?

“One important limitation is the lack of inter-reader agreement for PI-RADS evaluations for baseline MRIs,” Dr. Turkbey said. “Variation of PI-RADS is quite known, and centralized evaluations could have made this study stronger. Same applies for centralized quality evaluation of MRIs using The Prostate Imaging Quality (PI-QUAL) score. These items are difficult to do in a multicenter prospective data registry, and maybe authors will consider including these additional analyses in their future work.” 

How Does This New Approach to Prostate Cancer Risk Assessment Compare With Recent Advances in AI-Based Risk Assessment?

Over the past few years, artificial intelligence (AI)–assisted risk assessment in prostate cancer has been gaining increasing attention. Recently, for example, Artera, a self-styled “precision medicine company,” released the first AI tool to help patients choose between active surveillance and active treatment on the basis of analysis of digital pathology images. 

When asked to compare this approach with the methods used in the present study, Dr. Nandalur called the AI model “a step forward” but noted that it still relies on conventional risk criteria.

“Our data show imaging with MRI has independent prognostic information for prostate cancer patients considering active surveillance, over and above these traditional factors,” he said. “Moreover, this predictive ability of MRI was seen in low and favorable intermediate risk groups, so the additive value is broad.”

Still, he predicted that the future will not involve a binary choice, but a combination approach.

“The exciting aspect is that MRI results can eventually be added to this novel AI model and further improve prediction models for patients,” Dr. Nandalur said. “The combination of recent AI models and MRI will likely represent the future paradigm for prostate cancer patients considering active surveillance versus immediate treatment.”

The study was supported by Blue Cross and Blue Shield of Michigan. The investigators and Dr. Turkbey reported no conflicts of interest.

A version of this article first appeared on Medscape.com.


When asked to compare this approach with the methods used in the present study, Dr. Nandalur called the AI model “a step forward” but noted that it still relies on conventional risk criteria.

“Our data show imaging with MRI has independent prognostic information for prostate cancer patients considering active surveillance, over and above these traditional factors,” he said. “Moreover, this predictive ability of MRI was seen in low and favorable intermediate risk groups, so the additive value is broad.”

Still, he predicted that the future will not involve a binary choice, but a combination approach.

“The exciting aspect is that MRI results can eventually be added to this novel AI model and further improve prediction models for patients,” Dr. Nandalur said. “The combination of recent AI models and MRI will likely represent the future paradigm for prostate cancer patients considering active surveillance versus immediate treatment.”

The study was supported by Blue Cross and Blue Shield of Michigan. The investigators and Dr. Turkbey reported no conflicts of interest.

A version of this article first appeared on Medscape.com.

Adding baseline MRI to conventional prostate cancer risk stratification could improve prognostic accuracy, potentially affecting active surveillance and treatment decisions for some patients, according to the investigators on a new trial.

The multicenter, real-world trial showed that men with low-risk or favorable intermediate-risk disease who had higher Prostate Imaging Reporting and Data System (PI-RADS) scores at baseline were more likely to be reclassified with more aggressive disease on a future biopsy, wrote lead author Kiran R. Nandalur, MD, and colleagues. The study was published in The Journal of Urology.

This means that without MRI, some cases of prostate cancer are being labeled as lower-risk than they actually are.

The investigators noted that MRI is increasingly being used to choose patients who are appropriate for active surveillance instead of treatment, but related clinical data are scarce.

Although PI-RADS is the preferred metric for characterizing prostate tumors via MRI, “most previous studies on the prognostic implications of baseline PI-RADS score included smaller populations from academic centers, limited inclusion of clinical and pathologic data into models, and/or [are] ambiguous on the implications of PI-RADS score,” they wrote.

These knowledge gaps prompted the present study.
 

How Were Baseline MRI Findings Related to Prostate Cancer Disease Risk?

The dataset included 1491 men with prostate cancer that was diagnosed at 46 hospital-based, academic, or private practice urology groups. All had low-risk or favorable intermediate-risk disease and had undergone MRI within 6 months before or after initial biopsy, along with enrollment in active surveillance.

“A novel aspect of this study was that the MRIs were not read by dedicated prostate MRI experts at academic institutions, but rather a mix of community and academic radiologists,” Dr. Nandalur, medical director of Corewell Health East Radiology, Royal Oak, Michigan, said in an interview.

After traditional risk factors were accounted for, a baseline PI-RADS score of 4 or greater was significantly associated with an increased likelihood of reclassification to higher-grade prostate cancer on surveillance biopsy (hazard ratio, 2.3; 95% CI, 1.6-3.2; P < .001).
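As a back-of-envelope check, the reported interval behaves as expected for a Cox-model hazard ratio: a Wald confidence interval is symmetric on the log scale, so the standard error of log(HR) can be recovered from the published bounds. The sketch below uses only the article's reported numbers (HR, 2.3; 95% CI, 1.6-3.2); it is illustrative arithmetic, not the study's actual analysis.

```python
import math

# Reported result: HR 2.3, 95% CI 1.6-3.2 (numbers from the article;
# the reconstruction below is illustrative only).
hr, ci_low, ci_high = 2.3, 1.6, 3.2

# A Wald CI for a hazard ratio is symmetric on the log scale, so the
# standard error of log(HR) can be back-calculated from the CI bounds.
se_log_hr = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)

# Recomputing the bounds from log(HR) +/- 1.96 * SE recovers the
# published interval, up to the rounding used in the article.
lo = math.exp(math.log(hr) - 1.96 * se_log_hr)
hi = math.exp(math.log(hr) + 1.96 * se_log_hr)
```

Recomputed bounds land near 1.6 and 3.3, consistent with the rounded values reported in the study.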

“These patients with suspicious lesions on their initial MRI were more than twice as likely to have higher-grade disease within 5 years,” Dr. Nandalur noted. “This result was not only seen in the low-risk group but also in the favorable intermediate-risk group, which hasn’t been shown before.”

Grade group 2 vs 1 and increasing age were also associated with significantly increased risk for reclassification to a more aggressive cancer type.
 

How Might These Findings Improve Outcomes in Patients With Prostate Cancer?

Currently, 60%-70% of patients with low-risk disease choose active surveillance over immediate treatment, whereas 20% with favorable intermediate-risk disease choose active surveillance, according to Dr. Nandalur.

For low-risk patients, PI-RADS score is unlikely to change this decision, although surveillance intervals could be adjusted in accordance with risk. More notably, those with favorable intermediate-risk disease may benefit from considering PI-RADS score when choosing between active surveillance and immediate treatment.

“Most of the management strategies for prostate cancer are based on just your lab values and your pathology,” Dr. Nandalur said, “but this study shows that maybe we should start taking MRI into account — into the general paradigm of management of prostate cancer.”

Ideally, he added, prospective studies will confirm these findings, although such studies can be challenging to perform and similar data have historically been sufficient to reshape clinical practice.

“We are hoping that [baseline PI-RADS score] will be adopted into the NCCN [National Comprehensive Cancer Network] guidelines,” Dr. Nandalur said.

How Likely Are These Findings to Reshape Clinical Practice?

“The study’s large, multicenter cohort and its focus on the prognostic value of baseline MRI in active surveillance make it a crucial contribution to the field, providing evidence that can potentially refine patient management strategies in clinical practice,” Ismail Baris Turkbey, MD, FSAR, head of MRI Section, Molecular Imaging Branch, National Cancer Institute, Rockville, Maryland, said in a written comment.

“The findings from this study are likely to have a significant impact on clinical practice and potentially influence future guidelines in the management of localized prostate cancer, particularly in the context of active surveillance,” Dr. Turkbey said. “MRI, already a commonly used imaging modality in prostate cancer management, may become an even more integral part of the initial assessment and ongoing monitoring of patients with low or favorable-intermediate risk prostate cancer.”

Dr. Turkbey noted several strengths of the study.  

First, the size and the diversity of the cohort, along with the variety of treatment centers, support generalizability of findings. Second, the study pinpoints a “critical aspect” of active surveillance by uncovering the link between baseline MRI findings and later risk reclassification. Finally, the study also showed that increasing age was associated with higher likelihood of risk reclassification, “further emphasizing the need for personalized risk assessment” among these patients.
 

What Were Some Limitations of This Study?

“One important limitation is the lack of inter-reader agreement for PI-RADS evaluations for baseline MRIs,” Dr. Turkbey said. “Variation of PI-RADS is quite known, and centralized evaluations could have made this study stronger. Same applies for centralized quality evaluation of MRIs using The Prostate Imaging Quality (PI-QUAL) score. These items are difficult to do in a multicenter prospective data registry, and maybe authors will consider including these additional analyses in their future work.” 

How Does This New Approach to Prostate Cancer Risk Assessment Compare With Recent Advances in AI-Based Risk Assessment?

Over the past few years, artificial intelligence (AI)–assisted risk assessment in prostate cancer has been gaining increasing attention. Recently, for example, Artera, a self-styled “precision medicine company,” released the first AI tool to help patients choose between active surveillance and active treatment on the basis of analysis of digital pathology images. 

When asked to compare this approach with the methods used in the present study, Dr. Nandalur called the AI model “a step forward” but noted that it still relies on conventional risk criteria.

“Our data show imaging with MRI has independent prognostic information for prostate cancer patients considering active surveillance, over and above these traditional factors,” he said. “Moreover, this predictive ability of MRI was seen in low and favorable intermediate risk groups, so the additive value is broad.”

Still, he predicted that the future will not involve a binary choice, but a combination approach.

“The exciting aspect is that MRI results can eventually be added to this novel AI model and further improve prediction models for patients,” Dr. Nandalur said. “The combination of recent AI models and MRI will likely represent the future paradigm for prostate cancer patients considering active surveillance versus immediate treatment.”

The study was supported by Blue Cross and Blue Shield of Michigan. The investigators and Dr. Turkbey reported no conflicts of interest.

A version of this article first appeared on Medscape.com.

Article Source

FROM THE JOURNAL OF UROLOGY


AI Matches Expert Interpretation of Routine EEGs

Article Type
Changed
Thu, 08/22/2024 - 13:03

Artificial intelligence (AI) can accurately interpret routine clinical EEGs across a diverse population of patients, equipment types, and recording settings, according to investigators.

These findings suggest that SCORE-AI, the model tested, can reliably interpret common EEGs in real-world practice, supporting its recent FDA approval, reported lead author Daniel Mansilla, MD, a neurologist at Montreal Neurological Institute and Hospital, and colleagues.

“Overinterpretation of clinical EEG is the most common cause of misdiagnosing epilepsy,” the investigators wrote in Epilepsia. “AI tools may be a solution for this challenge, both as an additional resource for confirmation and classification of epilepsy, and as an aid for the interpretation of EEG in critical care medicine.”

To date, however, AI tools have struggled with the variability encountered in real-world neurology practice. “When tested on external data from different centers and diverse patient populations, and using equipment distinct from the initial study, medical AI models frequently exhibit modest performance, and only a few AI tools have successfully transitioned into medical practice,” the investigators wrote.
 

SCORE-AI Matches Expert Interpretation of Routine EEGs

The present study put SCORE-AI to the test with EEGs from 104 patients aged 16-91 years. These individuals hailed from “geographically distinct” regions, while recording equipment and conditions also varied widely, according to Dr. Mansilla and colleagues.

To set an external gold-standard for comparison, EEGs were first interpreted by three human expert raters, who were blinded to all case information except the EEGs themselves. The dataset comprised 50% normal and 50% abnormal EEGs. Four major classes of EEG abnormalities were included: focal epileptiform, generalized epileptiform, focal nonepileptiform, and diffuse nonepileptiform.

Comparing SCORE-AI interpretations with the experts’ interpretations revealed no significant difference in any metric or category. The AI tool had an overall accuracy of 92%, compared with 94% for the human experts. Of note, SCORE-AI maintained this level of performance regardless of vigilance state or normal variants.
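For readers curious why 92% versus 94% can count as “no significant difference,” a simple two-proportion z-test on the reported accuracies makes the point. This is an illustrative approximation only, assuming independent ratings of the same 104 EEGs; the study’s own comparison may have used a different (e.g., paired) method.

```python
import math

# Back-of-envelope two-proportion z-test on the reported accuracies
# (92% for SCORE-AI vs 94% for the human experts, n = 104 EEGs each).
n = 104
p_ai, p_expert = 0.92, 0.94

# Pooled proportion and standard error of the difference.
p_pooled = (p_ai + p_expert) / 2
se = math.sqrt(p_pooled * (1 - p_pooled) * (2 / n))
z = (p_expert - p_ai) / se

# Two-sided p-value from the normal CDF: well above 0.05, so a
# difference of this size is consistent with chance at this sample size.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

With these inputs the p-value is roughly 0.57, far from conventional significance, which is consistent with the authors' finding of no significant difference.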

“SCORE-AI has obtained FDA approval for routine clinical EEGs and is presently being integrated into broadly available EEG software (Natus NeuroWorks),” the investigators wrote.
 

Further Validation May Be Needed

Wesley T. Kerr, MD, PhD, functional (nonepileptic) seizures clinic lead epileptologist at the University of Pittsburgh Medical Center, and handling associate editor for this study in Epilepsia, said the present findings are important because they show that SCORE-AI can perform in scenarios beyond the one in which it was developed.

Still, it may be premature for broad commercial rollout.

In a written comment, Dr. Kerr called for “much larger studies” to validate SCORE-AI, noting that seizures can be caused by “many rare conditions,” and some patients have multiple EEG abnormalities.

Since SCORE-AI has not yet demonstrated accuracy in those situations, he predicted that the tool will remain exactly that – a tool – before it replaces human experts.

“They have only looked at SCORE-AI by itself,” Dr. Kerr said. “Practically, SCORE-AI is going to be used in combination with a neurologist for a long time before SCORE-AI can operate semi-independently or independently. They need to do studies looking at this combination to see how this tool impacts the clinical practice of EEG interpretation.”

Daniel Friedman, MD, an epileptologist and associate clinical professor of neurology at NYU Langone, pointed out another limitation of the present study: The EEGs were collected at specialty centers.

“The technical standards of data collection were, therefore, pretty high,” Dr. Friedman said in a written comment. “The majority of EEGs performed in the world are not collected by highly skilled EEG technologists and the performance of AI classification algorithms under less-than-ideal technical conditions is unknown.”

AI-Assisted EEG Interpretation Is Here to Stay

When asked about the long-term future of AI-assisted EEG interpretation, Dr. Friedman predicted that it will be “critical” for helping improve the accuracy of epilepsy diagnoses, particularly because most EEGs worldwide are interpreted by non-experts, leading to the known issue with epilepsy misdiagnosis.

“However,” he added, “it is important to note that epilepsy is a clinical diagnosis ... [EEG] is only one piece of evidence in neurologic decision making. History and accurate eyewitness description of the events of concern are extremely critical to the diagnosis and cannot be replaced by AI yet.”

Dr. Kerr offered a similar view, highlighting the potential for SCORE-AI to raise the game of non-epileptologists.

“My anticipation is that neurologists who don’t use SCORE-AI will be replaced by neurologists who use SCORE-AI well,” he said. “Neurologists who use it well will be able to read more EEGs in less time without sacrificing quality. This will allow the neurologist to spend more time talking with the patient about the interpretation of the tests and how that impacts clinical care.”

Then again, that time spent talking with the patient may also one day be delegated to a machine.

“It is certainly imaginable that AI chatbots using large language models to interact with patients and family could be developed to extract consistent epilepsy histories for diagnostic support,” Dr. Kerr said.

This work was supported by a project grant from the Canadian Institutes of Health Research and Duke Neurology start-up funding. The investigators and interviewees reported no relevant conflicts of interest.



Article Source

FROM EPILEPSIA


Automated ERCP Report Card Offers High Accuracy, Minimal Work

Article Type
Changed
Wed, 08/14/2024 - 09:30

A new endoscopic retrograde cholangiopancreatography (ERCP) report card automatically imports and analyzes performance metrics from endoscopy records, offering a real-time gauge of both individual- and institutional-level quality indicators, according to the developers.

The tool boasts an accuracy level exceeding 96%, integrates with multiple electronic health records, and requires minimal additional work time, reported Anmol Singh, MD, of TriStar Centennial Medical Center, Nashville, Tennessee, and colleagues.

“Implementation of quality indicator tracking remains difficult due to the complexity of ERCP as compared with other endoscopic procedures, resulting in significant limitations in the extraction and synthesis of these data,” the investigators wrote in Techniques and Innovations in Gastrointestinal Endoscopy. “Manual extraction methods such as self-assessment forms and chart reviews are both time intensive and error prone, and current automated extraction methods, such as natural language processing, can require substantial resources to implement and undesirably complicate the endoscopy work flow.”

To overcome these challenges, Dr. Singh and colleagues designed an analytics tool that automatically collects ERCP quality indicators from endoscopy reports with “minimal input” from the endoscopist, and is compatible with “any electronic reporting system.”

Development relied upon endoscopy records from 2,146 ERCPs performed by 12 endoscopists at four facilities. The most common indication for ERCP was choledocholithiasis, followed by malignant and benign biliary stricture. The most common procedures were stent placement and sphincterotomy.

Data were aggregated in a Health Level–7 (HL-7) interface, an international standard system that enables compatibility between different types of electronic health records. Some inputs were entered by the performing endoscopist via drop-down menus.
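HL-7 version 2 messages are pipe-delimited segments, which is part of what makes this kind of automated extraction tractable. The sketch below parses hypothetical observation (OBX) segments into a dictionary of quality indicators; the field layout and indicator names are invented for illustration and are not the report card's actual schema.

```python
# Hypothetical HL7 v2-style observation segments; the indicator names
# (CANNULATION_SUCCESS, etc.) are illustrative, not the tool's schema.
segments = [
    "OBX|1|ST|CANNULATION_SUCCESS||Y",
    "OBX|2|ST|CANNULATION_DIFFICULTY||GRADE2",
    "OBX|3|ST|PEP_PROPHYLAXIS_GIVEN||Y",
]

def parse_obx(segment: str) -> tuple[str, str]:
    # HL7 v2 fields are pipe-delimited; in an OBX segment, field 3 is
    # the observation identifier and field 5 is the observation value.
    fields = segment.split("|")
    return fields[3], fields[5]

# Aggregate one procedure's segments into a quality-indicator mapping.
indicators = dict(parse_obx(s) for s in segments)
```

An analytics layer could then roll such per-procedure mappings up into the provider- and institution-level metrics the article describes.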

Next, data were shifted into an analytics suite, which evaluated quality indicators, including cannulation difficulty and success rate, and administration of post-ERCP pancreatitis prophylaxis.

Manual review showed that this approach yielded an accuracy of 96.5%-100%.

Beyond this high level of accuracy, Dr. Singh and colleagues described several reasons why their tool may be superior to previous attempts at an automated ERCP report card.

“Our HL-7–based tool offers several advantages, including versatility via compatibility with multiple types of commercial reporting software and flexibility in customizing the type and aesthetic of the data displayed,” they wrote. “These features improve the user interface, keep costs down, and allow for integration into smaller or nonacademic practice settings.”

They also highlighted how the tool measures quality in relation to procedure indication and difficulty at the provider level.

“Unlike in colonoscopy, where metrics such as adenoma detection rate can be ubiquitously applied to all screening procedures, the difficulty and risk profile of ERCP is inextricably dependent on patient and procedural factors such as indication of the procedure, history of interventions, or history of altered anatomy,” Dr. Singh and colleagues wrote. “Prior studies have shown that both the cost-effectiveness and complication rates of procedures are influenced by procedural indication and complexity. As such, benchmarking an individual provider’s performance necessarily requires the correct procedural context.”

With further optimization, this tool can be integrated into various types of existing endoscopy reporting software at a reasonable cost, and with minimal impact on routine work flow, the investigators concluded.

The investigators disclosed relationships with AbbVie, Boston Scientific, Organon, and others.

Publications
Topics
Sections

A new endoscopic retrograde cholangiopancreatography (ERCP) report card automatically imports and analyzes performance metrics from endoscopy records, offering a real-time gauge of both individual- and institutional-level quality indicators, according to the developers.

The tool boasts an accuracy level exceeding 96%, integrates with multiple electronic health records, and requires minimal additional work time, reported Anmol Singh, MD, of TriStar Centennial Medical Center, Nashville, Tennessee, and colleagues.

“Implementation of quality indicator tracking remains difficult due to the complexity of ERCP as compared with other endoscopic procedures, resulting in significant limitations in the extraction and synthesis of these data,” the investigators wrote in Techniques and Innovations in Gastrointestinal Endoscopy. “Manual extraction methods such as self-assessment forms and chart reviews are both time intensive and error prone, and current automated extraction methods, such as natural language processing, can require substantial resources to implement and undesirably complicate the endoscopy work flow.”

To overcome these challenges, Dr. Singh and colleagues designed an analytics tool that automatically collects ERCP quality indicators from endoscopy reports with “minimal input” from the endoscopist, and is compatible with “any electronic reporting system.”

Development relied on endoscopy records from 2,146 ERCPs performed by 12 endoscopists at four facilities. The most common indication for ERCP was choledocholithiasis, followed by malignant and benign biliary stricture; the most common procedures were stent placement and sphincterotomy.

Data were aggregated in a Health Level–7 (HL-7) interface, an international standard system that enables compatibility between different types of electronic health records. Some inputs were entered by the performing endoscopist via drop-down menus.
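HL-7 v2 messages are plain text, with one segment per line and fields separated by pipe characters, which is what makes this kind of automated aggregation feasible across reporting systems. The sketch below is purely illustrative: the segment contents and field positions are hypothetical and do not represent the authors' actual interface.

```python
# Illustrative sketch of reading a pipe-delimited HL7 v2 message.
# Segment IDs (MSH, OBR) are real HL7 conventions; the sample values
# and field positions below are hypothetical.
def parse_segments(message: str) -> dict:
    """Index the segments of an HL7 v2 message by their segment ID."""
    segments = {}
    for line in message.strip().splitlines():
        fields = line.split("|")  # '|' is the HL7 v2 field separator
        segments.setdefault(fields[0], []).append(fields)
    return segments

# Hypothetical two-segment message: a header (MSH) and an order record (OBR)
sample = "MSH|^~\\&|ENDO_APP|SITE1\nOBR|1|ERCP123||ERCP^Cholangiopancreatography"
parsed = parse_segments(sample)
```

A downstream analytics layer would then read quality-indicator fields out of the indexed segments rather than re-parsing free-text reports.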

Next, data were fed into an analytics suite, which evaluated quality indicators including cannulation difficulty, cannulation success rate, and administration of post-ERCP pancreatitis prophylaxis.

Manual review showed that this approach yielded an accuracy of 96.5%-100%.

Beyond this high level of accuracy, Dr. Singh and colleagues described several reasons why their tool may be superior to previous attempts at an automated ERCP report card.

“Our HL-7–based tool offers several advantages, including versatility via compatibility with multiple types of commercial reporting software and flexibility in customizing the type and aesthetic of the data displayed,” they wrote. “These features improve the user interface, keep costs down, and allow for integration into smaller or nonacademic practice settings.”

They also highlighted how the tool measures quality in relation to procedure indication and difficulty at the provider level.

“Unlike in colonoscopy, where metrics such as adenoma detection rate can be ubiquitously applied to all screening procedures, the difficulty and risk profile of ERCP is inextricably dependent on patient and procedural factors such as indication of the procedure, history of interventions, or history of altered anatomy,” Dr. Singh and colleagues wrote. “Prior studies have shown that both the cost-effectiveness and complication rates of procedures are influenced by procedural indication and complexity. As such, benchmarking an individual provider’s performance necessarily requires the correct procedural context.”

With further optimization, this tool can be integrated into various types of existing endoscopy reporting software at a reasonable cost, and with minimal impact on routine work flow, the investigators concluded.

The investigators disclosed relationships with AbbVie, Boston Scientific, Organon, and others.


FROM TECHNIQUES AND INNOVATIONS IN GASTROINTESTINAL ENDOSCOPY


Family Size, Dog Ownership Linked With Reduced Risk of Crohn’s

Article Type
Changed
Tue, 08/13/2024 - 11:57

People who live with at least two other people in their first year of life and have a dog during childhood may be at reduced risk of developing Crohn’s disease (CD), according to investigators.

Those who live with a pet bird may be more likely to develop CD, although few participants in the study lived with birds, requiring a cautious interpretation of this latter finding, lead author Mingyue Xue, PhD, of Mount Sinai Hospital, Toronto, Ontario, Canada, and colleagues reported.

“Environmental factors, such as smoking, large families, urban environments, and exposure to pets, have been shown to be associated with the risk of CD development,” the investigators wrote in Clinical Gastroenterology and Hepatology. “However, most of these studies were based on a retrospective study design, which makes it challenging to understand when and how environmental factors trigger the biological changes that lead to disease.”

The present study prospectively followed 4289 asymptomatic first-degree relatives (FDRs) of patients with CD. Environmental factors were identified via regression models that also considered biological factors, including gut inflammation via fecal calprotectin (FCP) levels, altered intestinal permeability measured by urinary fractional excretion of lactulose to mannitol ratio (LMR), and fecal microbiome composition through 16S rRNA sequencing.

After a median follow-up period of 5.62 years, 86 FDRs (1.9%) developed CD.

Living in a household of at least three people in the first year of life was associated with a 57% reduced risk of CD development (hazard ratio [HR], 0.43; P = .019). Similarly, living with a pet dog between the ages of 5 and 15 also demonstrated a protective effect, dropping risk of CD by 39% (HR, 0.61; P = .025).
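The percent reductions quoted above follow directly from the hazard ratios: for an HR below 1, risk reduction is (1 − HR) × 100. A minimal sketch using the study's reported values:

```python
# Convert a hazard ratio (HR < 1) to the percent risk reduction quoted in the text.
def percent_risk_reduction(hr: float) -> int:
    return round((1 - hr) * 100)

# HRs reported above: 0.43 (household of >= 3 in first year), 0.61 (dog ownership)
print(percent_risk_reduction(0.43))  # 57 -> "57% reduced risk"
print(percent_risk_reduction(0.61))  # 39 -> "dropping risk ... by 39%"
```

The same arithmetic applies to the bird-ownership finding in the other direction: an HR of 2.84 corresponds to a nearly three-fold increase in risk.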

“Our analysis revealed a protective trend of living with dogs that transcends the age of exposure, suggesting that dog ownership could confer health benefits in reducing the risk of CD,” the investigators wrote. “Our study also found that living in a large family during the first year of life is significantly associated with the future onset of CD, aligning with prior research that indicates that a larger family size in the first year of life can reduce the risk of developing IBD.”

In contrast, the study identified bird ownership at time of recruitment as a risk factor for CD, increasing risk almost three-fold (HR, 2.84; P = .005). The investigators urged a careful interpretation of this latter finding, however, as relatively few FDRs lived with birds.

“[A]lthough our sample size can be considered large, some environmental variables were uncommon, such as the participants having birds as pets, and would greatly benefit from replication of our findings in other cohorts,” Dr. Xue and colleagues noted.

They suggested several possible ways in which the above environmental factors may impact CD risk, including effects on subclinical inflammation, microbiome composition, and gut permeability.

“Understanding the relationship between CD-related environmental factors and these predisease biomarkers may shed light on the underlying mechanisms by which environmental factors impact host health and ultimately lead to CD onset,” the investigators concluded.

The study was supported by Crohn’s and Colitis Canada, Canadian Institutes of Health Research, the Helmsley Charitable Trust, and others. The investigators disclosed no conflicts of interest.



FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY


Stool-Based Methylation Test May Improve CRC Screening

Article Type
Changed
Mon, 08/26/2024 - 06:51

A new stool-based syndecan-2 methylation (mSDC2) test may improve the detection of colorectal cancer (CRC) and advanced colorectal neoplasia (ACN), based on a prospective, real-world study.

These findings suggest that the mSDC2 assay could improve the efficacy and resource utilization of existing screening programs, reported co–lead authors Shengbing Zhao, MD, and Zixuan He, MD, of Naval Medical University, Shanghai, China, and colleagues.

“Conventional risk-stratification strategies, such as fecal immunochemical test (FIT) and life risk factors, are still criticized for being inferior at identifying early-stage CRC and ACN, and their real-world performance is probably further weakened by the low annual participation rate and compliance of subsequent colonoscopy,” the investigators wrote in Gastroenterology.

Recent case studies have reported “high diagnostic performance” using stool-based testing for mSDC2, which is “the most accurate single-targeted gene” for colorectal neoplasia, according to the investigators; however, real-world outcomes have yet to be demonstrated, prompting the present study. The prospective, multicenter, community-based trial compared the diagnostic performance of the mSDC2 test against FIT and Asia-Pacific Colorectal Screening (APCS) scores.

The primary outcome was detection of ACN. Secondary outcomes included detection of CRC, early-stage CRC, ACN, colorectal neoplasia (CN), and clinically relevant serrated polyp (CRSP). Screening strategies were also compared in terms of cost-effectiveness and impact on colonoscopy workload.

The final dataset included 10,360 participants aged 45-75 years who underwent screening between 2020 and 2022.

After determining APCS scores, stool samples were analyzed for mSDC2 and FIT markers. Based on risk stratification results, participants were invited to undergo colonoscopy. A total of 3,381 participants completed colonoscopy: 1,914 from the increased-risk population and 1,467 from the average-risk population.

Participants who tested positive for mSDC2 had significantly higher detection rates for all measured outcomes than those who tested negative (all, P < .05). For example, the detection rate for ACN was 26.6% in mSDC2-positive participants, compared with 9.3% in mSDC2-negative participants, for a relative risk of 2.87 (95% CI, 2.39-3.44). For CRC, the detection rate was 4.2% in mSDC2-positive participants vs 0.1% in mSDC2-negative participants, yielding a relative risk of 29.73 (95% CI, 10.29-85.91). Performance held steady across subgroups.

The mSDC2 test demonstrated cost-effectiveness by significantly reducing the number of colonoscopies needed to detect one case of ACN or CRC. Specifically, the number of colonoscopies needed to screen for ACN and CRC was reduced by 56.2% and 81.5%, respectively. Parallel combinations of mSDC2 with APCS or FIT enhanced both diagnostic performance and cost-effectiveness.
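As a quick consistency check, the relative risk for ACN is simply the ratio of the two detection rates. A short sketch using the rounded percentages reported above (the published 2.87 was presumably computed from unrounded counts, so the rounded figures land slightly lower):

```python
# Relative risk = detection rate in mSDC2-positive / rate in mSDC2-negative.
acn_positive = 0.266  # 26.6% ACN detection among mSDC2-positive participants
acn_negative = 0.093  # 9.3% ACN detection among mSDC2-negative participants

rr = acn_positive / acn_negative
print(round(rr, 2))  # 2.86, consistent with the reported 2.87
```

The wide CRC confidence interval (10.29-85.91) reflects how few events occurred in the mSDC2-negative group at a 0.1% detection rate.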

“This study further illustrates that the mSDC2 test consistently improves predictive abilities for CN, CRSP, ACN, and CRC, which is not influenced by subgroups of lesion location or risk factors, even under the risk stratification by FIT or APCS,” the investigators wrote. “The excellent diagnostic ability of mSDC2 in premalignant lesions, early-stage CRC, and early-onset CRC indicates a promising value in early detection and prevention of CRC ... the mSDC2 test or a parallel combination of multiple screening methods might be promising to improve real-world CRC screening performance and reduce colonoscopy workload in community practice.”

The study was supported by the National Key Research and Development Program of China, Deep Blue Project of Naval Medical University, Creative Biosciences, and others. The investigators reported no conflicts of interest.



FROM GASTROENTEROLOGY


AHS White Paper Guides Treatment of Posttraumatic Headache in Youth

Fri, 08/09/2024 - 12:35

The American Headache Society (AHS) has published a white paper guiding the treatment of posttraumatic headache caused by concussion in youth.

The guidance document, the first of its kind, covers risk factors for prolonged recovery, along with pharmacologic and nonpharmacologic management strategies, and supports an emphasis on multidisciplinary care, lead author Carlyn Patterson Gentile, MD, PhD, attending physician in the Division of Neurology at Children’s Hospital of Philadelphia in Pennsylvania, and colleagues reported.

“There are no guidelines to inform the management of posttraumatic headache in youth, but multiple studies have been conducted over the past 2 decades,” the authors wrote in Headache. “This white paper aims to provide a thorough review of the current literature, identify gaps in knowledge, and provide a road map for [posttraumatic headache] management in youth based on available evidence and expert opinion.”
 

Clarity for an Underrecognized Issue

According to Russell Lonser, MD, professor and chair of neurological surgery at Ohio State University, Columbus, the white paper is important because it offers concrete guidance for health care providers who may be less familiar with posttraumatic headache in youth.


“It brings together all of the previous literature ... in a very well-written way,” Dr. Lonser said in an interview. “More than anything, it could reassure [providers] that they shouldn’t be hunting down potentially magical cures, and reassure them in symptomatic management.”

Meeryo C. Choe, MD, associate clinical professor of pediatric neurology at UCLA Health in Calabasas, California, said the paper also helps shine a light on what may be a more common condition than the public suspects.

“While the media focuses on the effects of concussion in professional sports athletes, the biggest population of athletes is in our youth population,” Dr. Choe said in a written comment. “Almost 25 million children participate in sports throughout the country, and yet we lack guidelines on how to treat posttraumatic headache which can often develop into persistent postconcussive symptoms.”

This white paper, she noted, builds on Dr. Gentile’s 2021 systematic review, introduces new management recommendations, and aligns with the latest consensus statement from the Concussion in Sport Group.

Risk Factors

The white paper first emphasizes the importance of early identification of youth at high risk for prolonged recovery from posttraumatic headache. Risk factors include female sex, adolescent age, a high number of acute symptoms following the initial injury, and social determinants of health.


“I agree that it is important to identify these patients early to improve the recovery trajectory,” Dr. Choe said.

Identifying these individuals quickly allows for timely intervention with both pharmacologic and nonpharmacologic therapies, Dr. Gentile and colleagues noted, potentially mitigating persistent symptoms. Clinicians are encouraged to perform thorough initial assessments to identify these risk factors and initiate early, personalized management plans.

 

 

Initial Management of Acute Posttraumatic Headache

For the initial management of acute posttraumatic headache, the white paper recommends a scheduled dosing regimen of simple analgesics. Ibuprofen at a dosage of 10 mg/kg every 6-8 hours (up to a maximum of 600 mg per dose) combined with acetaminophen has shown the best evidence for efficacy. Provided the patient is clinically stable, this regimen should be initiated within 48 hours of the injury and maintained with scheduled dosing for 3-10 days.

If effective, these medications can subsequently be used on an as-needed basis. Careful usage of analgesics is crucial, the white paper cautions, as overadministration can lead to medication-overuse headaches, complicating the recovery process.
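The weight-based regimen described above (10 mg/kg per dose, capped at 600 mg) reduces to a simple capped multiplication. The helper below is a hypothetical illustration of that arithmetic only, not clinical dosing software; the function name and defaults are ours.

```python
def ibuprofen_dose_mg(weight_kg, mg_per_kg=10, max_mg=600):
    """Weight-based dose (mg), capped at the per-dose maximum."""
    return min(weight_kg * mg_per_kg, max_mg)

# A 40-kg adolescent gets the weight-based dose; a 70-kg patient hits the cap.
print(ibuprofen_dose_mg(40))  # 400
print(ibuprofen_dose_mg(70))  # 600
```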

Secondary Treatment Options

In cases where first-line oral medications are ineffective, the AHS white paper outlines several secondary treatment options. These include acute intravenous therapies such as ketorolac, dopamine receptor antagonists, and intravenous fluids. Nerve blocks and oral corticosteroid bridges may also be considered.

The white paper stresses the importance of individualized treatment plans that consider the specific needs and responses of each patient, noting that the evidence supporting these approaches is primarily derived from retrospective studies and case reports.


“Patient preferences should be factored in,” said Sean Rose, MD, pediatric neurologist and codirector of the Complex Concussion Clinic at Nationwide Children’s Hospital, Columbus, Ohio.

Supplements and Preventive Measures

For adolescents and young adults at high risk of prolonged posttraumatic headache, the white paper suggests the use of riboflavin and magnesium supplements. Small randomized clinical trials suggest that these supplements may aid in speeding recovery when administered for 1-2 weeks within 48 hours of injury.

If significant headache persists after 2 weeks, a regimen of riboflavin 400 mg daily and magnesium 400-500 mg nightly can be trialed for 6-8 weeks, in line with recommendations for migraine prevention. Additionally, melatonin at a dose of 3-5 mg nightly for an 8-week course may be considered for patients experiencing comorbid sleep disturbances.

Targeted Preventative Therapy

The white paper emphasizes the importance of targeting preventative therapy to the primary headache phenotype.

For instance, patients presenting with a migraine phenotype, or those with a personal or family history of migraines, may be most likely to respond to medications proven effective in migraine prevention, such as amitriptyline, topiramate, and propranolol.

“Most research evidence [for treating posttraumatic headache in youth] is still based on the treatment of migraine,” Dr. Rose pointed out in a written comment.

Dr. Gentile and colleagues recommend initiating preventive therapies 4-6 weeks post injury if headaches are not improving, occur more than 1-2 days per week, or significantly impact daily functioning.

Specialist Referrals and Physical Activity

Referral to a headache specialist is advised for patients who do not respond to first-line acute and preventive therapies. Specialists can offer advanced diagnostic and therapeutic options, the authors noted, ensuring a comprehensive approach to managing posttraumatic headache.

The white paper also recommends noncontact, sub–symptom threshold aerobic physical activity and activities of daily living after an initial 24-48 hour period of symptom-limited cognitive and physical rest. Engaging in these activities may promote faster recovery and help patients gradually return to their normal routines.

“This has been a shift in the concussion treatment approach over the last decade, and is one of the most important interventions we can recommend as physicians,” Dr. Choe noted. “This is where pediatricians and emergency department physicians seeing children acutely can really make a difference in the recovery trajectory for a child after a concussion. ‘Cocoon therapy’ has been proven not only to not work, but be detrimental to recovery.”
 

Nonpharmacologic Interventions

Based on clinical assessment, nonpharmacologic interventions may also be considered, according to the white paper. These interventions include cervico-vestibular therapy, which addresses neck and balance issues, and cognitive-behavioral therapy, which helps manage the psychological aspects of chronic headache. Dr. Gentile and colleagues highlighted the potential benefits of a collaborative care model that incorporates these nonpharmacologic interventions alongside pharmacologic treatments, providing a holistic approach to posttraumatic headache management.

“Persisting headaches after concussion are often driven by multiple factors,” Dr. Rose said. “Multidisciplinary concussion clinics can offer multiple treatment approaches such as behavioral, physical therapy, exercise, and medication options.”
 

Unmet Needs

The white paper concludes by calling for high-quality prospective cohort studies and randomized, placebo-controlled trials to further advance the understanding and treatment of posttraumatic headache in children.

Dr. Lonser, Dr. Choe, and Dr. Rose all agreed.

“More focused treatment trials are needed to gauge efficacy in children with headache after concussion,” Dr. Rose said.

Specifically, Dr. Gentile and colleagues underscored the need to standardize data collection via common elements, which could improve the ability to compare results across studies and develop more effective treatments. In addition, research into the underlying pathophysiology of posttraumatic headache is crucial for identifying new therapeutic targets and clinical and biological markers that can personalize patient care.

They also stressed the importance of exploring the impact of health disparities and social determinants on posttraumatic headache outcomes, aiming to develop interventions that are equitable and accessible to all patient populations.

The white paper was approved by the AHS and supported by National Institutes of Health/National Institute of Neurological Disorders and Stroke grant K23 NS124986. The authors disclosed relationships with Eli Lilly, Pfizer, Amgen, and others. The interviewees disclosed no conflicts of interest.

FROM HEADACHE
