Durvalumab combinations show tentative promise in NSCLC

Wed, 11/09/2022 - 12:12

Results of a phase 2 clinical trial of durvalumab with the add-on therapies oleclumab or monalizumab suggest these novel combinations may prove beneficial in patients with unresectable stage 3 non–small-cell lung cancer.

Combinations of the PD-L1 inhibitor durvalumab (Imfinzi, AstraZeneca) with the anti-CD73 monoclonal antibody oleclumab or the anti-NKG2A monoclonal antibody monalizumab led to improved objective response rates and progression-free survival compared with durvalumab alone.

The findings support further study in a phase 3 clinical trial, according to the authors of the study recently published in the Journal of Clinical Oncology.

Durvalumab is the standard consolidation treatment following chemoradiotherapy in unresectable stage 3 non–small-cell lung cancer (NSCLC). Although it extended progression-free survival (PFS) and overall survival in the phase 3 PACIFIC study, some patients experience recurrence, which has led to exploration of immunotherapy combinations.

Oleclumab inhibits CD73, an enzyme found on the surface of both tumor and immune cells whose activity has an immunosuppressive effect in the tumor microenvironment. Preclinical studies have shown that oleclumab can have an additive antitumor effect when combined with PD-1 or PD-L1 inhibitors, and a phase 1 study also suggested efficacy. Monalizumab blocks the interaction between HLA-E, a nonclassical major histocompatibility complex class I molecule, and the inhibitory receptor NKG2A. A number of tumors overexpress HLA-E, triggering inhibitory signals that suppress natural killer and CD8+ T cells.

“COAST was an interesting study that, although not definitive, suggested that the combination of durvalumab with oleclumab or with monalizumab was more effective than durvalumab alone in the consolidation setting after definitive concurrent chemoradiation for patients with stage 3 unresectable NSCLC,” said Nathan Pennell, MD, PhD, who wrote an accompanying editorial.

Despite the positive signal, Dr. Pennell expressed some skepticism that the combinations would pass a phase 3 test. He questioned the choice of response rate as the primary endpoint of the phase 2 study and noted that the durvalumab arm had worse PFS than in the earlier PACIFIC trial. If the clinical characteristics of the study populations differed between the two trials, the improved objective response rate (ORR) and PFS results should be encouraging. It is also possible that the COAST trial’s small sample size led to a mismatch between the control and treatment groups despite randomization, in which case the findings may not be valid.

“These are the kinds of issues that keep drug developers up at night. There really is no way to know which scenario is correct without doing the larger trial. I do hope though that the phase 3 trials have robust biomarker analysis including PDL1 to make sure the arms are as well matched for known prognostic and predictive markers as possible,” said Dr. Pennell, who is vice chair of clinical research at Taussig Cancer Institute.
 

The study details

The researchers randomized 189 patients to durvalumab, durvalumab plus oleclumab, or durvalumab plus monalizumab between January 2019 and July 2020. After a median follow-up of 11.5 months, there was a higher confirmed objective response rate in the durvalumab plus oleclumab group (30.0%; 95% confidence interval, 18.8%-43.2%) and the durvalumab plus monalizumab group (35.5%; 95% CI, 23.7%-48.7%) versus durvalumab alone (17.9%; 95% CI, 9.6%-29.2%).

Compared with durvalumab alone, there was improved PFS with both durvalumab plus oleclumab (stratified hazard ratio, 0.44; 95% CI, 0.26-0.75) and durvalumab plus monalizumab (HR, 0.42; 95% CI, 0.24-0.72). At 12 months, PFS was 62.6% (95% CI, 48.1%-74.2%) for durvalumab plus oleclumab, 72.7% (95% CI, 58.8%-82.6%) for durvalumab plus monalizumab, and 33.9% (95% CI, 21.2%-47.1%) for durvalumab alone.

The study was funded by AstraZeneca.

FROM JOURNAL OF CLINICAL ONCOLOGY

Diazepam nasal spray effective in Lennox-Gastaut syndrome

Tue, 10/25/2022 - 08:09

 

A new analysis of data from a phase 3 clinical trial suggests that diazepam nasal spray (Valtoco, Neurelis Inc.) works about as well in patients with Lennox-Gastaut syndrome (LGS) as it does in other patients with pediatric epileptic encephalopathies.

LGS is a severe form of epilepsy that generally begins in early childhood, has a poor prognosis, and causes seizures that are often treatment refractory. The findings should be encouraging to physicians who may assume that patients with LGS do not benefit from rescue treatment, said Daniel C. Tarquinio, DO, who presented the results at the 2022 annual meeting of the Child Neurology Society.

“Their response to their first appropriate weight-based rescue dose of Valtoco was essentially no different. They were subtly different, but they’re not really meaningful differences. Very few needed a second dose. In practice this is helpful because we know that kids with LGS, we think of them as having worse epilepsy, if you will. But if they need rescue, if we prescribe an appropriate rescue dose based on their weight, that the same rescue will work for them as it will for a kid that doesn’t have – quote unquote – as bad epilepsy that needs rescue,” said Dr. Tarquinio, a child neurologist and epileptologist and founder of the Center for Rare Neurological Diseases.

During the Q&A, Dr. Tarquinio was asked if there is something about the biology of LGS that would suggest it might respond differently to the drug. Dr. Tarquinio said no. “The reason we even looked at this is because many clinicians told us that their sense was [that patients with LGS] did not respond as well to rescue in general no matter what they use. This allowed us to go back and look at a controlled data set and say, at least in our controlled dataset, they respond the same,” he said.

Grace Gombolay, MD, who moderated the session, agreed that the results should be encouraging. “It seems like a lot of clinicians have the sense that Lennox-Gastaut Syndrome is a very terrible refractory epilepsy syndrome, and so doing rescue doesn’t seem to make sense if they don’t really respond. I think it’s helpful to know because there are actually studies showing that Valtoco seems to actually work in those patients, so it’s actually useful clinically to prescribe those patients and give it a shot,” said Dr. Gombolay, director of the Pediatric Neuroimmunology and Multiple Sclerosis Clinic at Emory University, Atlanta.

LGS patients may experience hundreds of seizures per day. “It’s really hard for parents to quantify, did they get better? Did the rescue help or not, because they’re still having some seizures. I think the sense is, ‘oh, this isn’t working.’ That’s probably the bias. I think this is good data that if you are able to get Valtoco for your patients, I think it’s worth a shot even in Lennox-Gastaut,” said Dr. Gombolay.

The researchers conducted a post hoc analysis of the phase 3, open-label, repeat-dose safety study of Valtoco. The study included a 12-month treatment period with visits at day 30 and every 60 days following. Patients had the option of staying on the drug following the end of the treatment period. Seizure and dosing information were obtained from a diary. The study enrolled 163 patients whose physicians believed they would need to be treated with a benzodiazepine at least once every other month to achieve seizure control. Dosing was determined by a combination of age and weight. If a second dose was required, caregivers were instructed to provide it 4-12 hours after the first dose.

In the study cohort, 47.9% of patients were aged 6-17 years. The researchers looked specifically at 73 cases of seizure clusters; in nine cases, the patient had LGS (five male, four female). Nearly all (95.9%) of the LGS cluster cases were treated with a single dose, and 4.1% received a second dose. Among 64 cases involving patients with pediatric epileptic encephalopathies, 89.4% were treated with a single dose and 10.6% received a second. The safety profile was similar between patients with LGS and those with pediatric encephalopathies.

Dr. Gombolay has no relevant financial disclosures.

AT CNS 2022

Ulcerative colitis: Reassuring findings on long-term tofacitinib reported

Wed, 11/02/2022 - 14:02

 

Updated long-term safety data suggest that tofacitinib (Xeljanz, Pfizer) is generally safe for long-term use in the treatment of moderate to severe ulcerative colitis (UC), with adverse events consistent with previous studies and a stable incidence over time of adverse events of special interest.

“Findings from these integrated safety analyses are reassuring for patients with UC. The incidence rates of most key events that led to the black box warning are lower in these cohorts, compared with results observed in the ORAL Surveillance study of older patients with RA [rheumatoid arthritis],” said Siddharth Singh, MD, who was asked to comment on the study.

The results support the current clinical approach, in which tofacitinib is typically employed following infliximab (Remicade, Janssen) failure, though the paradigm may change. “These findings on safety reassure our approach, though there will still be hesitation to use 10-mg twice-daily dosing in older patients. However, with the recent approval of upadacitinib (Rinvoq, AbbVie), a selective JAK1 inhibitor that seems to be more effective than tofacitinib, positioning of tofacitinib may evolve,” said Dr. Singh, who is an associate professor of medicine at University of California, San Diego, and director of the UCSD IBD Center.

Earlier evidence has supported safety concerns with tofacitinib. The prospective ORAL Surveillance study compared two doses of tofacitinib (5 or 10 mg twice daily) with tumor necrosis factor (TNF) inhibitors in patients with rheumatoid arthritis. The researchers enrolled patients aged 50 years and older with at least one additional cardiovascular risk factor. The study found higher rates of major adverse cardiovascular events and malignancies in the tofacitinib groups, as well as higher rates of mortality, serious infection, and venous thromboembolism. The findings prompted a Food and Drug Administration “black box” warning for tofacitinib in July 2019, which was extended to two other JAK inhibitors in September 2021.

However, patients in the UC clinical program are generally younger than participants in the ORAL Surveillance study and were less likely to have a smoking history.

The new study, published in the Journal of Crohn’s and Colitis, represents an update of a pooled analysis from phase 2, phase 3, and open-label extension studies with up to 4.4 years of exposure. Analysis of the earlier cohort showed that tofacitinib had a generally similar safety profile to other UC therapies, with the exception of a higher incidence of herpes zoster infection. Since that publication, the researchers have compiled additional person-years of tofacitinib exposure from the open-label extension OCTAVE Open study and the 6-month interim analysis of the phase 3b/4 RIVETING study.

The new study included 1,157 patients who received at least one dose of tofacitinib. Overall, 35.6% had received treatment for longer than 4 years. The mean age was 41.3 years, 58.7% were male, and 80.1% were White; 64.0% had never smoked, and 30.9% were ex-smokers. The mean disease duration was 8.2 years. In all, 83% of patients were on a 10 mg dose, and 17% were on 5 mg.

In the 2016 analysis, 82.1% of patients had an adverse event and 14.6% had a serious adverse event. In the overall cohort, the percentages were 85.7% and 21%, respectively.

In the updated analysis, 11.6% of patients discontinued the medication because of an adverse event. For all doses, incidence rates (IRs) for adverse events were defined as unique patients with events per 100 person-years of exposure. The IRs for death and for adverse events of special interest were similar between the original and updated cohorts. For example, the IR for death was 0.24 (95% confidence interval, 0.07-0.61) in the earlier cohort and 0.23 (95% CI, 0.09-0.46) in the combined cohort; for serious infections, 1.99 (95% CI, 1.37-2.79) versus 1.69 (95% CI, 1.26-2.21); for serious and nonserious herpes zoster infection, 4.07 (95% CI, 3.14-5.19) versus 3.30 (95% CI, 2.67-4.04); and for opportunistic infections, 1.28 (95% CI, 0.79-1.96) versus 1.03 (95% CI, 0.70-1.46).

The updated cohort included 3.4 more years of observation and an additional 1,386.9 person-years of exposure. That resulted in a final tally of up to 7.8 years of exposure and a combined 2,999.7 person-years of exposure, “thus demonstrating that the safety profile of tofacitinib remained consistent with increased extent and length of exposure,” the authors wrote.

Despite the promising findings, Dr. Singh called for more research. “We need a dedicated safety registry of tofacitinib and other JAK inhibitors in patients with IBD, who do not share the characteristics of those studied in the ORAL Surveillance study,” he said.

The authors disclose ties to various pharmaceutical companies, including Pfizer, which manufactures tofacitinib. Dr. Singh has received personal fees from Pfizer for ad hoc grant review.
 

Publications
Topics
Sections

 

Updated long-term safety data suggest that tofacitinib (Xeljanz, Pfizer) is generally safe for long-term use in the treatment of moderate to severe ulcerative colitis (UC), with adverse events (AE) consistent with previous studies and showed stability over time in the incidence of adverse events of special interest.

“Findings from these integrated safety analyses are reassuring for patients with UC. The incidence rates of most key events that led to the black box warning are lower in these cohorts, compared with results observed in the ORAL Surveillance study of older patients with RA [rheumatoid arthritis],” said Siddharth Singh, MD, who was asked to comment on the study.

The results support the current clinical approach, in which tofacitinib is typically employed following infliximab (Remicade, Janssen) failure, though the paradigm may change. “These findings on safety reassure our approach, though there will still be hesitation to use 10-mg twice-daily dosing in older patients. However, with the recent approval of upadacitinib (Rinvoq, AbbVie), a selective JAK1 inhibitor that seems to be more effective than tofacitinib, positioning of tofacitinib may evolve,” said Dr. Singh, who is an associate professor of medicine at University of California, San Diego, and director of the UCSD IBD Center.

There has been evidence to support safety concerns with tofacitinib. The prospective ORAL Surveillance study compared two doses of tofacitinib (5 or 10 mg, twice daily) to tumor necrosis factor (TNF) inhibitors in patients with rheumatoid arthritis. The researchers selected patients aged 50 years and older with at least one additional cardiovascular risk factor. The study found higher rates of major adverse cardiovascular events and malignancies in the tofacitinib groups, as well as higher rates of mortality, serious infection, and venous thromboembolism. The findings prompted a Food and Drug Administration “black box” warning for tofacitinib in July 2019, which was extended to two other JAK inhibitors in September 2021.

However, patients in the UC clinical program were generally younger than participants in the ORAL Surveillance study and were less likely to have a smoking history.

The new study, published in the Journal of Crohn’s and Colitis, represents an update of a pooled analysis from phase 2, phase 3, and open-label extension studies with up to 4.4 years of exposure. Analysis of the earlier cohort showed that tofacitinib had a generally similar safety profile to other UC therapies, with the exception of a higher incidence of herpes zoster infection. Since that publication, the researchers have compiled additional person-years of tofacitinib exposure from the open-label extension OCTAVE Open study and the 6-month interim analysis of the phase 3b/4 RIVETING study.

The new study included 1,157 patients who received at least one dose of tofacitinib. Overall, 35.6% had received treatment for longer than 4 years. The mean age was 41.3 years, 58.7% were male, and 80.1% were White; 64.0% had never smoked, and 30.9% were ex-smokers. The mean disease duration was 8.2 years. In all, 83% of patients were on a 10-mg dose, and 17% were on 5 mg.

In the 2016 analysis, 82.1% of patients had an adverse event and 14.6% had a serious adverse event. In the overall cohort, the percentages were 85.7% and 21%, respectively.

In the updated analysis, 11.6% of patients discontinued the medication because of an adverse event. For all doses, incidence rates (IRs) for adverse events were defined as unique patients with events per 100 person-years of exposure. The IRs for death and for adverse events of special interest were similar between the original and updated cohorts: for death, 0.24 (95% confidence interval, 0.07-0.61) versus 0.23 (95% CI, 0.09-0.46); for serious infections, 1.99 (95% CI, 1.37-2.79) versus 1.69 (95% CI, 1.26-2.21); for serious and nonserious herpes zoster infection, 4.07 (95% CI, 3.14-5.19) versus 3.30 (95% CI, 2.67-4.04); and for opportunistic infections, 1.28 (95% CI, 0.79-1.96) versus 1.03 (95% CI, 0.70-1.46).
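For readers who want to see how such figures are derived, here is a minimal sketch of the incidence-rate calculation (unique patients with events per 100 person-years) with an approximate confidence interval. The event count of 7 is hypothetical, and the published analysis likely used an exact Poisson method rather than the log-scale normal approximation shown here:

```python
import math

def incidence_rate(events: int, person_years: float) -> float:
    """Unique patients with events per 100 person-years of exposure."""
    return 100.0 * events / person_years

def approx_poisson_ci(events: int, person_years: float, z: float = 1.96):
    """Approximate 95% CI for a rate per 100 person-years.

    Uses a normal approximation on the log scale; illustrative only,
    not the exact method a published safety analysis would report.
    """
    if events == 0:
        # Exact Poisson upper bound for zero events: -ln(0.025) ~= 3.69
        return (0.0, 100.0 * 3.69 / person_years)
    rate = incidence_rate(events, person_years)
    se_log = 1.0 / math.sqrt(events)
    return (rate * math.exp(-z * se_log), rate * math.exp(z * se_log))

# Hypothetical example: 7 deaths over the 2,999.7 combined person-years
# reported in the article (the event count itself is assumed).
rate = incidence_rate(7, 2999.7)
lo, hi = approx_poisson_ci(7, 2999.7)
print(f"IR = {rate:.2f} per 100 person-years (95% CI {lo:.2f}-{hi:.2f})")
```

With these assumed inputs the rate works out to about 0.23 per 100 person-years, which is in the range of the mortality IRs quoted above.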

The updated cohort included 3.4 more years of observation and an additional 1,386.9 person-years of exposure. That resulted in a final tally of up to 7.8 years of exposure and a combined 2,999.7 person-years of exposure, “thus demonstrating that the safety profile of tofacitinib remained consistent with increased extent and length of exposure,” the authors wrote.
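The reported totals are internally consistent, as a quick arithmetic check shows (the prior person-years figure backed out below is derived by us, not stated in the article):

```python
prior_max_years = 4.4           # maximum exposure in the earlier analysis
added_years = 3.4               # additional observation in the update
added_person_years = 1386.9     # additional person-years of exposure
combined_person_years = 2999.7  # combined person-years after the update

# 4.4 + 3.4 years gives the stated maximum of 7.8 years of exposure
print(round(prior_max_years + added_years, 1))
# Back out the earlier cohort's person-years of exposure
print(round(combined_person_years - added_person_years, 1))
```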

Despite the promising findings, Dr. Singh called for more research. “We need a dedicated safety registry of tofacitinib and other JAK inhibitors in patients with IBD, who do not share the characteristics of those studied in the ORAL Surveillance study,” he said.

The authors disclose ties to various pharmaceutical companies, including Pfizer, which manufactures tofacitinib. Dr. Singh has received personal fees from Pfizer for ad hoc grant review.
FROM JOURNAL OF CROHN’S AND COLITIS


Analysis suggests an ‘urgent need’ for personalized head and neck cancer follow-up

Article Type
Changed
Fri, 10/21/2022 - 14:31

 

There has been significant progress in recent years in the treatment of head and neck cancer, and there are useful evidence-based guidelines to inform treatment choices. But less guidance is available when it comes to follow-up and surveillance of patients who have gone into remission.

Existing guidelines for advanced head and neck cancer follow-up are quite broad, with recommended follow-ups ranging from 11 to 27 visits in the 5 years following treatment and no consideration of subtypes.

“Once patients complete treatment for head and neck cancer in particular, they move into survivorship and surveillance phases, and then we are sort of just following patients based on expert opinion,” said Daniel Clayburgh, MD, PhD.

A new study, published online in JAMA Otolaryngology–Head & Neck Surgery, used a novel approach to group head and neck cancer subtypes and calculate optimal follow-up schedules for each.

“They found that in the low-risk types of cancers like HPV-related oropharynx cancers, you really don’t need to see patients all that often because they tend to do quite well, versus the patients that don’t do very well in the long run, such as hypopharyngeal cancers. Their model predicts that you need to see them much, much more often. When you compare that to our guidelines, it says that we’re probably seeing some patients too often, and then there are other patients who we may not be seeing often enough,” said Dr. Clayburgh, who was asked to comment on the study. He is an associate professor of otolaryngology head and neck surgery at Oregon Health and Science University, Portland, and chief of surgery at the Portland VA Healthcare System.

“I thought it was a clever approach to investigate this other aspect of cancer care that often is a little bit overlooked,” said Dr. Clayburgh, who coauthored an invited commentary published in conjunction with the study.

He said the study results are intriguing, but not quite ready for general clinical practice. The study did not include oral cavity cancers, which are a major subtype, and the results need to be validated in larger patient populations. He also pointed out that recurrence is only one reason to see patients after the treatment phase. “There are long-term treatment effects. There are various mental health issues and other health issues that they can have aside from just cancer. So it can be helpful to see patients more often than just purely as dictated on how often they’re going to have a cancer recurrence. I don’t think we’re quite there to exactly what is the right number of visits or how often they need to come in, but I think [this paper is] an important step toward that, and it definitely provides fodder for modification of current guidelines,” Dr. Clayburgh said.
 

The study details

The researchers estimated event-free survival, defined as the time from the end of treatment to any event, using a piecewise exponential model. Each optimal follow-up timepoint was defined by the occurrence of a 5% event rate. The study included 673 patients with locally advanced head and neck cancer, with a median age of 58 years. A total of 82.5% were men. The researchers did not report race or ethnicity. Over a median follow-up of 57.8 months, the frequency of events was 18.9% among 227 patients with nasopharyngeal cancer (NPC), 14.8% among 237 patients with human papillomavirus-positive oropharyngeal cancer (HPV+ OPC), 36.2% among 47 patients with HPV– OPC, 44.6% among 65 patients with hypopharyngeal cancer (HPC), and 30.9% among 97 patients with laryngeal cancer (LC).

The researchers divided follow-up into a period of response evaluation and close follow-up, which included the first 6 months after end of treatment, and three phases: 6.0 to 16.5 months (first phase); 16.5 to 25.0 months (second phase); and 25.0 to 99.0 months (third phase). Open follow-up continues after the third phase.
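A rough sketch of the interval logic is shown below. It collapses each subtype to a single constant hazard, estimated from the event fractions and the 57.8-month median follow-up reported above, then solves for the time at which the cumulative event probability reaches 5%. This single-hazard simplification is ours, for illustration only; the study itself fit a piecewise exponential model with phase-specific hazards:

```python
import math

def hazard_from_event_fraction(event_fraction: float, months: float) -> float:
    """Constant-hazard estimate from the fraction of patients with an
    event over a given follow-up period (a deliberate simplification
    of the study's piecewise exponential model)."""
    return -math.log(1.0 - event_fraction) / months

def follow_up_interval(hazard: float, threshold: float = 0.05) -> float:
    """Months until cumulative event probability reaches `threshold`
    under a constant hazard: solve 1 - exp(-h * t) = threshold for t."""
    return -math.log(1.0 - threshold) / hazard

# Event fractions over the ~57.8-month median follow-up, from the article.
for name, frac in [("HPV+ OPC", 0.148), ("NPC", 0.189), ("LC", 0.309),
                   ("HPV- OPC", 0.362), ("HPC", 0.446)]:
    h = hazard_from_event_fraction(frac, 57.8)
    print(f"{name}: suggested visit roughly every {follow_up_interval(h):.1f} months")
```

Even this crude version reproduces the qualitative result: the highest-risk group (HPC) ends up with follow-up intervals several times shorter than the lowest-risk groups (HPV+ OPC and NPC).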

The researchers identified surveillance intervals for each phase in each of the five patient groups (NPC, HPV+ OPC, HPV– OPC, HPC, and LC) and found that the intervals differed substantially. The longest intervals were recommended for HPV+ OPC and NPC patients, and the shortest for HPC. Overall, there was a threefold difference in the number of recommended follow-ups among the head and neck cancer groups.

“Given the limited health care resources and the rising number of patients with head and neck cancer, patient-tailored and evidence-based assessment schedules will benefit both patients and health systems. Further investigation for consensus guidelines is needed, and we hope that the findings of this study will aid in their establishment in the near future,” the authors wrote.

The study is limited by its reliance on retrospective data, and must be validated in other patient populations before it is suitable for clinical practice.

Dr. Clayburgh has no relevant financial disclosures.

FROM JAMA OTOLARYNGOLOGY–HEAD & NECK SURGERY


In epilepsy, heart issues linked to longer disease duration

Article Type
Changed
Tue, 11/22/2022 - 11:13

 

Pediatric patients with epilepsy have an increased risk of cardiovascular complications later in life, but little is known about how those complications develop. A new study finds that electrocardiogram (EKG) abnormalities are linked to an earlier age at diagnosis and a longer duration of epilepsy.

The findings could help researchers in the search for biomarkers that could predict later problems in children with epilepsy. “In pediatric neurology I think we’re a little bit removed from some of the cardiovascular complications that can happen within epilepsy, but cardiovascular complications are well established, especially in adults that have epilepsy. Adults with epilepsy are more likely to have coronary artery disease, atherosclerosis, arrhythmias, heart attacks, and sudden cardiac death. It’s a pretty substantial difference compared with their nonepileptic peers. So knowing that, the big question is, how do these changes develop, and how do we really counsel our patients in regards to these complications?” said Brittnie Bartlett, MD, during her presentation of the research at the 2022 annual meeting of the Child Neurology Society.

Identifying factors that increase cardiac complications

Previous studies suggested that epilepsy duration might be linked to cardiovascular complications. In children with Dravet syndrome, epilepsy duration has been shown to be associated with cardiac complications. Pathological T wave alternans, which indicates ventricular instability, has been observed in adults with longstanding epilepsy but not adults with newly diagnosed epilepsy.

“So our question in this preliminary report of our data is: What factors in our general pediatric epilepsy cohort can we identify that put them at a greater risk for having EKG changes, and specifically, we wanted to verify these findings from the other studies that epilepsy duration is, in fact, a risk factor for these EKG changes in general [among children] with epilepsy aside from channelopathies,” said Dr. Bartlett, who is an assistant professor at Baylor College of Medicine and a child neurologist at Texas Children’s Hospital, both in Houston.

She presented a striking finding that cardiovascular changes appear early. “The most important thing I want you all to make note of is the fact that, in this baseline study that we got on these kids, 47% already had changes that we were seeing on their EKGs,” said Dr. Bartlett.

The researchers also looked for factors associated with EKG changes, and found that duration of epilepsy and age at diagnosis were the two salient factors. “Our kids that did have EKG changes present had an average epilepsy duration of 73 months, as opposed to [the children] that did not have EKG changes and had an average epilepsy duration of 46 months,” said Dr. Bartlett.

Other factors, such as epilepsy type, etiology, refractory epilepsy, and seizure frequency, had no statistically significant association with EKG changes. They also saw no associations with high-risk seizure medications, even though some antiseizure drugs have been shown to be linked to EKG changes.

“We were able to confirm our hypothesis that EKG changes were more prevalent with longer duration of epilepsy. Unfortunately, we weren’t able to find any other clues that would help us counsel our patients, but this is part of a longitudinal prospective study that we’ll be following these kids over a couple of years’ time, so maybe we’ll be able to tease out some of these differences. Ideally, we’d be able to find some kind of a biomarker for future cardiovascular complications, and right now we’re working with some multivariable models to verify some of these findings,” said Dr. Bartlett.

Implications for clinical practice

During the Q&A, Dr. Bartlett was asked if all kids with epilepsy should undergo an EKG. She recommended against it for now. “At this point, I don’t think we have enough clear data to support getting an EKG on every kid with epilepsy. I do think it’s good practice to do them on all kids with channelopathies. As a general practice, I tend to have a low threshold towards many kids with epilepsy, but a lot of these cardiovascular risk factors tend to pop up more in adulthood, so it’s more preventative,” she said.

Grace Gombolay, MD, who moderated the session where the poster was presented, was asked for comment on the study. “What’s surprising about it is that up to half of patients actually had EKG changes, different from what we see in the normal population, and it’s interesting to think about the implications. One of the things that our epilepsy patients are at risk for is SUDEP – sudden, unexplained death in epilepsy. It’s interesting to think about what these EKG changes mean for clinical care. I think it’s too early to say at this time, but this might be one of those markers for SUDEP,” said Dr. Gombolay, who is an assistant professor at Emory University, Atlanta, and director of the Pediatric Neuroimmunology and Multiple Sclerosis Clinic at Children’s Healthcare of Atlanta.

The researchers prospectively studied 213 recruited patients: 46% were female, 42% were White, 41% were Hispanic, and 13% were African American. The mean age at enrollment was 116 months, and the mean age at seizure onset was 45 months.

The researchers found that 47% had abnormal EKG readings. None of the changes were pathologic, but they may reflect changes to cardiac electrophysiology, according to Dr. Bartlett. Those with abnormal readings were older on average (11.6 vs. 8.3 years; P < .005) and had a longer epilepsy duration (73 vs. 46 months; P = .004).

Dr. Gombolay has no relevant financial disclosures.

Neurology Reviews - 30(12)
 


FROM CNS 2022

NICU signs hint at cerebral palsy risk

Article Type
Changed
Thu, 10/20/2022 - 14:58

 

Cerebral palsy affects about 3 in every 1,000 children, but there is usually little sign of the condition at birth. It typically manifests clinically between ages 2 and 5, and a diagnosis can trigger early interventions that can improve long-term outcomes.

Physicians and patients would benefit from a screening method for cerebral palsy at birth, but that has so far eluded researchers.

At the 2022 annual meeting of the Child Neurology Society, researchers presented evidence that respiratory rate measured in the last 24 hours of residence in the neonatal intensive care unit (NICU) predicts later onset of cerebral palsy, with higher variability associated with increased cerebral palsy risk.

The study results were promising, according to Marc Patterson, MD, who comoderated the session. “It gives us more confidence in predicting the children at risk and making sure that they’re going to be followed closely to get the interventions they need to help them,” said Dr. Patterson, who is a professor of neurology, pediatrics, and medical genetics at Mayo Medical School in Rochester, Minn.

“By the time a child is 5 or 6, the symptoms are usually very obvious, but you really want to intervene as soon as possible before their brain’s plasticity decreases over time, so the earlier you can intervene in general, the better your results are going to be,” said Dr. Patterson.

There are tools available to diagnose cerebral palsy at an earlier age, including the Prechtl General Movements Assessment (GMA), which can be done up to 5 months of corrected age and has 97% sensitivity and 89% specificity for cerebral palsy. The Hammersmith Infant Neurological Examination (HINE) can be used in the same age range and has 72%-91% sensitivity and 100% specificity.

Both tools are resource intensive, require trained clinicians, and may be unavailable in many areas. As a result, early diagnosis of cerebral palsy remains underused, according to Arohi Saxena, a third-year medical student at Washington University in St. Louis, who presented the study results.
 

Respiratory rate variability may indicate increased risk

The researchers set out to identify objective metrics that correlated with HINE and GMA scores. They looked at kinematic data from practical assessments carried out by their physical therapists, as well as vital sign instability obtained at NICU discharge, which was based on suggestions that hemodynamic instability may be linked to later risk of cerebral palsy, according to Ms. Saxena.

They analyzed data from 31 infants with a corrected age of 8-25 weeks at a tertiary NICU follow-up clinic. Of these, 18 displayed fidgety movements on their Prechtl assessment, and 13 did not.

They used DeepLabCut software to analyze data from videos of the Prechtl assessment, with a focus on range and variance of hand and foot movements normalized to nose-to-umbilicus distance. They also analyzed pulse and respiratory data from the final 24 hours before NICU discharge.
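To make the kinematic analysis concrete, the computation described above can be sketched roughly as follows. This is a hypothetical illustration of normalizing a tracked keypoint's movement range and variance to the nose-to-umbilicus distance, not the study's actual code; the function name and array layout are assumptions.

```python
import numpy as np

def movement_metrics(keypoints, nose, umbilicus):
    """Range and variance of one tracked keypoint, normalized to body size.

    keypoints : (T, 2) array of x, y positions of a hand or foot over T frames
    nose, umbilicus : (T, 2) arrays of the reference landmarks
    (Hypothetical sketch of the kind of analysis described; not the study's code.)
    """
    # Body-size scale: mean per-frame nose-to-umbilicus distance
    scale = np.linalg.norm(nose - umbilicus, axis=1).mean()
    norm = keypoints / scale
    # Span of motion along each axis, summed into one range value
    movement_range = (norm.max(axis=0) - norm.min(axis=0)).sum()
    # Variability of position over time, summed across axes
    movement_variance = norm.var(axis=0).sum()
    return movement_range, movement_variance
```

In practice, pose-estimation software such as DeepLabCut outputs per-frame keypoint coordinates from which arrays like these could be assembled.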

They found that infants without fidgety movements had decreased hand and foot movement ranges (P = .04). There was no significant difference between the two groups with respect to pulse measurements. However, respiratory rate range and variance were significantly higher in infants without fidgety movements. “Infants who are at higher risk for developing cerebral palsy had more respiratory instability early on in life,” said Ms. Saxena during her talk.

When they compared values with HINE scores, they found that less foot movement correlated with a predisposition to develop cerebral palsy, but hand movement did not. A lower HINE score also correlated with larger respiratory rate range and variance (P < .01 for both).

“Our hypothesis to explain this link is that respiratory rate variability is likely driven by neonatal injury in the brainstem, where the respiratory centers are located. In some infants, this may correlate with more extensive cerebral injury that could predict the development of cerebral palsy,” said Ms. Saxena.

The group plans to increase its sample size as well as to conduct long-term follow-up on the infants to see how many receive formal diagnoses of cerebral palsy.

After her talk, asked by a moderator why motor assessments were not a reliable predictor in their study, Ms. Saxena pointed to the inexperience of assessors at the institution, where Prechtl testing had only recently begun.

“I think a lot of it is to do with the more subjective nature of the motor assessment. We definitely saw kind of a trend where in the earlier data that was collected, right when our institutions started doing these Prechtls, it was even less of a reliable effect. So I think possibly as clinicians continue to get more familiar with this assessment and there’s more like a validated and robust scoring system, maybe we’ll see a stronger correlation,” she said.

Ms. Saxena had no relevant disclosures. Coauthor Boomah Aravamuthan, MD, DPhil, is a consultant for Neurocrine Biosciences and has received royalties from UpToDate and funding from the National Institute of Neurological Disorders and Stroke.


FROM CNS 2022

Cerebral palsy: Video clues suggest dystonia

Article Type
Changed
Wed, 10/19/2022 - 09:36

CINCINNATI – Dystonia is a frequent complication seen in cerebral palsy, but it often goes undiagnosed. Using a unique video analysis, researchers have identified some movement features that have the potential to simplify diagnosis.

“[We have] previously demonstrated that by the age of 5 years, only 30% of children seen in a clinical setting have had their predominant motor phenotype identified, including dystonia. This helps demonstrate a broad diagnostic gap and the need for novel solutions,” said Laura Gilbert, DO, during her presentation of the results at the 2022 annual meeting of the Child Neurology Society.

Diagnosis of dystonia is challenging because of its clinical variability, and diagnostic tools often require a trained physician, which limits access to diagnosis. Expert clinician consensus therefore remains the gold standard for diagnosing dystonia.

Another clinical need is that specific features of dystonia have not been well described in the upper extremities, and the research suggests there could be differences in the brain injuries contributing to dystonia in the upper versus the lower extremities.

The researchers set out to discover expert-identified features of patient videos that could be used to allow nonexperts to make a diagnosis of dystonia.

The researchers analyzed 26 videos with upper extremity exam maneuvers performed on children with periventricular leukomalacia at St. Louis Children’s Hospital Cerebral Palsy Center from 2005 to 2018. Among the study cohort, 65% of patients were male, 77% were White, and 11% were Black. By Gross Motor Function Classification System (GMFCS) level, 24% of patients were level I, 24% level II, 24% level III, 16% level IV, and 12% level V. A total of 12% of patients were older than 20, 11% were aged 15-20, 38% were aged 10-15, 31% were aged 5-10, and 8% were age 5 or younger.
 

Video clues aid diagnosis

Three pediatric movement disorder specialists independently reviewed each video and assessed severity of dystonia. They then met over Zoom to reach a diagnostic consensus for each case.

The research team performed a content analysis of the experts’ discussions and identified specific statement fragments. The frequency of these fragments was then linked to severity of dystonia.

A total of 45% of the statement fragments referenced movement codes, which in turn comprised five content areas: 33% referenced a body part, 24% focused on laterality, 22% described movement features, 18% described an action, and 3% described exam maneuvers. Examples included shoulder (body part), unilateral (laterality), brisk (movement feature), flexion (action), and finger-nose-finger (exam maneuver).

With increasing dystonia severity, the shoulder was more often cited and hand was cited less often. Mirror movements, defined as involuntary, contralateral movements that are similar to the voluntary action, occurred more often in patients with no dystonia or only mild dystonia. Variability of movement over time, which is a distinguishing feature found in lower extremities, was not significantly associated with dystonia severity.

Within the category of exam maneuver, hand opening and closing was the most commonly cited, and it was cited more frequently among individuals with mild dystonia (70% vs. about 10% for both no dystonia and moderate to severe dystonia; P < .005).

“So how can we adopt this clinically? First, we can add in a very brief exam maneuver of hand opening and closing that can help assess for mild dystonia. Shoulder involvement may suggest more severe dystonia, and we must recognize the dystonia features seem to differ by body region and the triggering task. Overall, to help improve dystonia diagnosis, we must continue to work towards understanding these salient features to fully grasp the breadth of dystonia manifestations in people with [cerebral palsy],” said Dr. Gilbert, who is a pediatric movement disorders fellow at Washington University in St. Louis.
 

 

 

Key features help determine dystonia severity

The study is particularly interesting for its different findings in upper extremities versus lower extremities, according to Keith Coffman, MD, who comoderated the session where the study was presented. “That same group showed that there are very clear differences in lower-extremity function, but when they looked at upper extremity, there really weren’t robust differences. What it may show is that the features of cerebral palsy regarding dystonia may be very dependent on what type of injury you have to your brain. Because when you think about where the motor fibers that provide leg function, they live along the medial walls of the brain right along the midline, whereas the representation of the hand and arm are more out on the lateral side of the brain. So it may be that those regional anatomy differences and where the injury occurred could be at the baseline of why they had such differences in motor function,” said Dr. Coffman, who is a professor of pediatrics at University of Missouri–Kansas City and director of the movement disorders program at Children’s Mercy Hospital, also in Kansas City, Mo.

He suggested that the researchers might also do kinematic analysis of the videos to make predictions using quantitative differences in movement.

The research has the potential to improve dystonia diagnosis, according to comoderator Marc Patterson, MD, professor of neurology, pediatrics, and medical genetics at Mayo Clinic in Rochester, Minn. “I think they really pointed to some key features that can help clinicians distinguish [dystonia severity]. Something like the speed of opening and closing the hands [is a] fairly simple thing. That was to me the chief value of that study,” Dr. Patterson said.

Dr. Gilbert reported no relevant disclosures.



AT CNS 2022


Real-world evidence seen for metal stents in biliary strictures

Article Type
Changed
Mon, 10/17/2022 - 13:29

A real-world analysis in the United Kingdom found that a fully covered metal stent is safe and effective at controlling anastomotic strictures (AS) following liver transplants.

Biliary AS occurs in an estimated 5%-32% of patients following a liver transplant. These strictures have generally been managed by inserting side-by-side plastic stents to remodel the stricture, an approach that often requires multiple procedures to achieve resolution. More recently introduced transpapillary fully covered self-expanding metallic stents (FCSEMSs) appear to perform equivalently to their plastic counterparts while requiring fewer procedures.

The new study “is yet another large experience demonstrating that use of fully covered metal stents for treating anastomotic biliary strictures is highly effective and also cost-effective because you really decrease the number of ERCPs [endoscopic retrograde cholangiopancreatographies] that are required to treat an anastomotic stricture,” said Vladimir Kushnir, MD, who was asked to comment on the study, which was published in Therapeutic Advances in Gastroenterology.

The researchers analyzed retrospective data from 162 consecutive patients who underwent ERCP with intraductal self-expanding metal stent (IDSEMS) insertion at nine tertiary centers. The procedures employed the Kaffes (Taewoong Niti-S) biliary covered stent, which is not available in the United States. Unlike conventional FCSEMSs, the device does not have to traverse the papilla. It is also shorter and includes an antimigration waist and removal wires that may reduce the risk of silent migration. Small case series suggested efficacy in the treatment of post–liver transplant AS.

There were 176 episodes of stent insertion among the 162 included patients; 62% of patients were male, and the median age at transplant was 54 years. Etiologies included hepatocellular carcinoma (22%), alcohol-related liver disease (18%), and nonalcoholic fatty liver disease (12%). The median time to development of a stricture was 24.9 weeks. Among all patients, 35% had previously received stents; 75% of those were plastic stents.

Overall, 81% of patients had a resolution of their strictures after a median stent dwell time of 15 weeks, while 10% of patients experienced stricture recurrence at a median interval of 19 weeks following stent removal.

Dr. Kushnir, from Washington University in St. Louis, highlighted the differences between the stent used in the study and those currently available in the United States. “This type of stent is a self-expanding metal stent that’s covered, but what’s different about it is that it’s designed to go completely within the bile duct, whereas a traditional fully covered metal stent traverses the major duodenal papilla.”

Despite those differences, he believes that the study can inform current practice in the United States. “In situations where you’re faced with a question of whether or not you leave multiple plastic stents in, or you put a full metal stent in that’s going to be fully within the bile duct, I think this data does provide some reassurance. If you’re using one of the traditional stents that we have in the United States and putting it fully within the bile duct, you do need to be prepared to have a little bit of a harder time removing the stent when the time comes for the removal procedure, which could require cholangioscopy. But this does provide some evidence to back up the practice of using fully covered metal stents fully within the bile duct to remediate anastomotic strictures that may be just a little too high up to treat traditionally with a stent that remains transpapillary,” said Dr. Kushnir.

The study also suggests an avenue for further research. “What’s also interesting about this study is that they only left the stents in for 3 months. In most clinical trials, where we’ve used fully covered metal stents for treating anastomotic biliary strictures, you leave the stent in from anywhere from 6 to 12 months. So with only 3 months dwell time they were able to get pretty impressive results, at least in the short term, in a retrospective study, so it does raise the question of should we be evaluating shorter dwell times for stents in treating anastomotic strictures when we’re using a fully covered metal stent that’s a larger diameter?” said Dr. Kushnir.

The authors noted some limitations, such as the retrospective design, small sample size, and lack of control group. They also noted that the multicenter design may have introduced heterogeneity in patient management and follow-up.

“In conclusion, IDSEMS appear to be safe and highly efficacious in the management of [post–liver transplant] AS,” concluded the authors. “Long-term outcomes appear good with low rates of AS recurrence.”

The authors declare no conflicts of interest. Dr. Kushnir is a consultant for ConMed and Boston Scientific.


FROM THERAPEUTIC ADVANCES IN GASTROENTEROLOGY


Looking for the source of neuroendocrine tumors

Reprogramming cells toward a neuroendocrine fate
Article Type
Changed
Tue, 10/11/2022 - 16:50

The diversity of neuroendocrine tumors (NETs) – which includes variation in location, mutational profile, and response to therapy – may be due to divergent cellular origins in different tissue sites, according to a new study.

The pathogenesis of gastroenteropancreatic neuroendocrine neoplasms (GEP-NENs) is poorly understood, in part because of a lack of modeling systems, according to Suzann Duan, PhD, and colleagues. These heterogeneous tumors, which arise from endocrine-producing cells and include gastric carcinoids, gastrinomas, and pancreatic NETs, are increasingly prevalent in the United States.

Despite the general mystery surrounding GEP-NENs, there is at least one clue in the form of the MEN1 gene. Both inherited and sporadic mutations of this gene are associated with GEP-NENs. Menin is a tumor suppressor protein, and previous studies have shown that inactivation of MEN1 leads to loss of that protein and is associated with endocrine tumors in the pancreas, pituitary, and upper GI tract.

In new research published in Cellular and Molecular Gastroenterology and Hepatology, researchers investigated the role of MEN1 in neuroendocrine cell development and traced it to a potential role in the development of NETs.

Patients with MEN1 mutations are at increased risk of gastrinomas, which lead to increased production of the peptide hormone gastrin. Gastrin increases acid production and can lead to hyperplasia in parietal and enterochromaffin cells. These tumors generally develop in Brunner’s glands within the submucosa of the duodenum, and at the time of diagnosis more than half have developed lymph node metastases.

It remains unclear how loss of MEN1 drives excess gastrin production. Previous research showed that homozygous MEN1 deletion in mice is lethal to embryos, while leaving one copy intact leads to heightened risk of endocrine tumors in the pancreas and pituitary gland, but not in the GI tract. The studies did not reveal the tumor’s cell of origin.

The researchers developed a novel mouse model in which MEN1 is conditionally deleted from the GI tract epithelium. This led to hyperplasia of gastrin-producing cells (G cells) in the antrum, as well as hypergastrinemia and development of gastric NETs. Exposure to a proton pump inhibitor accelerated gastric NET development, and the researchers identified expansion of enteric glial cells that expressed gastrin and GFAP. Glial cells that differentiated into endocrine phenotype were associated with a reversible loss of menin. “Taken together, these observations suggest that hyperplastic G cells might emerge from reprogrammed neural crest–derived cells in addition to endoderm-derived enteroendocrine cells,” the authors wrote.

That idea is supported by previous research indicating that multipotent glial cells expressing GFAP or SOX10 may play a developmental role in formation of neuroendocrine cells.

With this in mind, the researchers deleted MEN1 in GFAP-expressing cells to see if it would promote neuroendocrine cell development.

The result was hyperplasia in the gastric antrum and NETs in the pituitary and pancreas. To the researchers’ surprise, NET development was associated with loss of GFAP expression as well as activation of neuronal and neuroendocrine-related genes in the stomach, pancreas, and pituitary. There was universal reduction of GFAP protein expression in pituitary and pancreatic NETs, but GFAP transcript levels stayed steady in the gastric antra despite a reduction in GFAP-reporter expression. This could indicate that the menin protein interacts with GFAP. If so, eliminating menin in GFAP-positive cells could change the localization of GFAP, which may in turn lead to changes in glial cell identity.

When the researchers compared transcriptomes of hyperplastic antral tissues to well-differentiated NETs, they found that NETs exhibited a greater loss of glial-restricted progenitor lineage–associated genes as well as more downregulation of gliogenesis-directing factors. “Thus, the transition from a glial-to-neuronal cell phenotype appears to promote the progression from neuroendocrine cell hyperplasia to tumor development,” the authors wrote. They also found that NETs have higher levels of expression of genes associated with neural stem and progenitor cells, as well as upregulation of factors secreted from neural crest cells that promote neurogenesis and restrict the glial cell fate. Many of these factors are part of the Hedgehog signaling pathway, and menin is known to repress Hedgehog signaling.

Intestinal glial cells have a high degree of plasticity. They can become neuronal progenitor cells, and they can also dedifferentiate and then redifferentiate into other cell lineages.

The research could eventually lead to identification of unique cells-of-origin for these tumors. The authors say that the diversity of the tumors – which includes variation in location, mutational profile, and response to therapy – may be due to divergent cellular origins in different tissue sites. “Defining the cells-of-origin and the events preceding neoplastic transformation will be critical to informing molecular signaling pathways that can then be targeted therapeutically,” the authors wrote.

The authors disclosed no conflicts of interest.

Body

Gastroenteropancreatic neuroendocrine neoplasms (GEP-NENs) share endocrine and neural features but are diverse in terms of their location, behaviors, and response to therapies. One explanation for heterogeneity in GEP-NENs is that they have diverse cellular origins. The study by Duan and colleagues suggests that glia could be a potential cell of origin in GEP-NENs. GEP-NEN development in the pancreas, pituitary, and upper gastrointestinal tract is associated with mutations in the Multiple Endocrine Neoplasia I (MEN1) gene that cause a loss of the tumor suppressor protein menin.

The authors found that deleting Men1 only in glial fibrillary acidic protein (GFAP)–expressing cells leads to the development of pancreatic and pituitary neuroendocrine tumors and changes to the epithelial lining of the stomach. These observations suggest a role for menin in glial development and/or maturation that, when lost, can contribute to cellular reprogramming toward a neuroendocrine fate. However, it is also possible that deleting Men1 affects the developmental trajectories of GFAP-expressing progenitor cells rather than reprogramming mature glia. Interestingly, tumor development and neuroendocrine reprogramming were only observed in the pituitary, pancreas, and stomach, and did not seem to occur in other organs with large populations of similar GFAP-positive cells such as the brain, spinal cord, or other peripheral organs. This seems to indicate specialized developmental roles of menin in these locations or that glia in the pituitary, pancreas, and stomach exhibit a heightened plastic potential that differs from other populations of glia.

The tumorigenic potential of GFAP-positive cells differs even among the pituitary, pancreas, and stomach, since mice lacking Men1 in GFAP-positive cells did not develop gastrinomas while tumors were observed in the pituitary and pancreas. This could indicate that additional drivers, not required in other locations, are necessary to promote NENs in the intestine. These differences could be important when considering treatment strategies, given the diverse nature of the cells and mechanisms involved.

Brian D. Gulbransen, PhD, is an associate professor in the department of physiology and an MSU Foundation Professor at Michigan State University, East Lansing. He has no conflicts.


Title
Reprogramming cells toward a neuroendocrine fate


Article Source

FROM CELLULAR AND MOLECULAR GASTROENTEROLOGY AND HEPATOLOGY


Water exchange boosts colonoscopy training experience

Article Type
Changed
Tue, 10/11/2022 - 10:15

A new study finds that colonoscopy trainees had a better experience and performed better when using water exchange (WE) than when using air insufflation. The findings were published in the Journal of Clinical Gastroenterology.

According to study author Felix W. Leung, MD, from the Veterans Affairs Greater Los Angeles Healthcare System in North Hills, Calif., and the University of California, Los Angeles, WE is less painful than air insufflation and increases cecal intubation rate because it reduces loop formation. He added that it also increases polyp and adenoma detection rates.

Although WE has compared favorably with air insufflation for adenoma detection rate (ADR) and pain, there is little evidence regarding how trainees view WE versus air insufflation. Dr. Leung pointed out that the issue could be particularly important among millennial trainees, who may have a different learning style than previous generations. He also noted that previous studies of WE versus air insufflation among trainees measured the perspective of trainers and did not include the trainees’ opinions of the learning process or trainee outcomes such as polyp detection rate.

Seeking to fill this knowledge gap, Dr. Leung conducted a prospective observational study at a Veterans Administration hospital. Trainees conducted unsedated colonoscopies using WE, as well as WE and air insufflation colonoscopies in alternating order in sedated patients. A total of 83 air insufflation and 119 WE colonoscopies were performed. Trainees rated their experiences on a 1- to 5-point scale, with 1 being “strongly agree” and 5 “strongly disagree,” in response to two statements: “My colonoscopy experience was better than expected” and “I was confident with my technical skills using this method.”

On average, trainees reported a better than expected experience with WE than with air insufflation (2.02 vs. 2.43; P = .0087), but no significant difference in confidence in their technical skills (2.76 vs. 2.85; P = .48). WE was associated with a significantly higher adjusted cecal intubation rate (99% vs. 89%; P = .0031) and a significantly higher polyp detection rate (54% vs. 32%; P = .0447). Overall insertion time was longer with WE than with air insufflation (40 minutes vs. 30 minutes; P = .0008), but withdrawal times were similar (22 minutes vs. 20 minutes; P = .3369).

The reduction in pain associated with WE could improve training, in which procedures are typically performed on patients under moderate sedation, according to John Allen, MD, who was asked to comment on the study.

He also said that WE can sometimes do a better job than air of opening the lumen. It can help clean the colon surface, and even improve visibility. “Viewing the mucosa under water is like having a lens that helps view the surface and enhance polyp detection,” said Dr. Allen, who is a retired clinical professor of medicine at the University of Michigan, Ann Arbor.

Dr. Allen noted that either air insufflation or WE can be used to overcome the inexperience of the trainee, and that there shouldn’t be much difference between the two methods for sedated colonoscopies. The time of exam is similar, and WE does not require use of carbon dioxide or other gases, which avoids extra costs. “A highly skilled colonoscopist can perform exams using any of the available media. That said, WE is proving to be helpful no matter what your skill level. The only disadvantage I can see is that many trainers do not know how WE works and are unused to this process, although it is easy to learn,” said Dr. Allen.

The study is limited by the fact that it was conducted at a single institution in a nonblinded, nonrandomized population.

Dr. Leung declared there are no conflicts of interest to disclose. Dr. Allen has no relevant financial disclosures.


Article Source

FROM THE JOURNAL OF CLINICAL GASTROENTEROLOGY
