BTKi resistance: ‘Achilles’ heel’ in effective treatment of B-cell malignancies

While the use of Bruton tyrosine kinase inhibitors (BTKis) has significantly enhanced treatment of patients with B-cell malignancies, BTKi resistance is the “Achilles’ heel” of this otherwise effective therapeutic option, Deborah M. Stephens, DO, and John C. Byrd, MD, stated in a review article published in Blood.

Among patients with B-cell malignancies – including chronic lymphocytic leukemia (CLL), Waldenström’s macroglobulinemia (WM), mantle cell lymphoma (MCL), and marginal zone lymphoma (MZL) – BTKis have substantial efficacy. The review focuses mainly on BTKi resistance, which is either primary (extremely rare) or acquired (more common), with particular attention to acquired resistance to ibrutinib (11%-38% in large studies).

Primary resistance suggests an alternative diagnosis or transformation to a more aggressive lymphoma. Acquired ibrutinib resistance manifests either as progressive CLL (typically after 2 years of therapy) or as early transformation (within the first 2 years of therapy) to more aggressive entities such as diffuse large B-cell lymphoma, Hodgkin lymphoma, or prolymphocytic leukemia. Acquired resistance to acalabrutinib and zanubrutinib, which is less well studied, has been reported in the 12%-15% range.

Acquired resistance has meant a reduction in expected overall survival (OS), and while the introduction of new therapies such as venetoclax has extended OS, short progression-free survival (PFS) provides a rationale for research into mechanisms of resistance and alternative treatments.

Acquired resistance

Resistance to ibrutinib monotherapy in patients with CLL, most often acquired, has been associated with high-risk genomic features – complex karyotype, TP53 mutation, and del(17p13.1) – as well as heavy pretreatment. In the phase 3 RESONATE trial, patients with both TP53 mutation and del(17p13.1) had shorter PFS than those with only one of these genomic features. This finding may explain the fairly good ibrutinib monotherapy outcomes in treatment-naive patients with del(17p13.1).

Through univariable and multivariable analyses, a machine-learning program consistently identified four risk factors associated with impaired survival: TP53 mutation, prior CLL therapy, beta-2 microglobulin of at least 5 mg/L, and lactate dehydrogenase greater than 250 U/L. A second survival model, which compared ibrutinib with chemoimmunotherapy, identified beta-2 microglobulin of at least 5 mg/L, lactate dehydrogenase greater than the upper limit of normal, hemoglobin less than 110 g/L for women or less than 120 g/L for men, and less than 24 months since initiation of the last therapy as risk factors.

While the mechanisms leading to ibrutinib resistance in patients with these risk factors are not clearly known, some research suggests that survival of TP53-mutated CLL cells is less dependent on the B-cell receptor (BCR) pathway, making this CLL type more prone to ibrutinib resistance. Compared with TP53–wild-type CLL cells, TP53-mutated CLL cells demonstrate down-regulation of BCR-related genes and up-regulation of prosurvival and antiapoptotic genes.

BTK mutations

Mutation at C481 in the active kinase domain of BTK is the most common BTKi resistance mechanism described in CLL. A thymidine-to-adenine mutation at nucleotide 1634 leads to a 25-fold decrease in drug potency. Other genes or chromosome regions implicated in BTKi resistance include PLCγ2, del(8p), CARD11, TRAF2 and TRAF3, BIRC3, MAP3K14, ARID2, SMARCA2, SMARCA4, MYD88, KLHL14, and TNFAIP3.

Multiple mutations of PLCγ2, the next most common BTKi resistance mechanism, include substitutions of arginine to tryptophan, leucine to phenylalanine, serine to tyrosine, and others. When activated, these gain-of-function mutations prolong BCR signaling.

Ibrutinib resistance has also been associated with deletion of the short arm of chromosome 8 (del[8p]); CLL cells harboring del(8p) are insensitive to TRAIL-induced apoptosis, leading to continuous cell growth. Ibrutinib resistance in patients with WM has also been associated with del(8p).

CARD11 mutations, which allow for BTK-independent activation of NF-κB, have been documented in ibrutinib-resistant patients with CLL and other lymphoid malignancies, as detailed in this review.

Novel therapies suggest promise

Survival in CLL after BTKi resistance develops is quite short, according to the authors, and they expressed hope that continued research into novel agents would prolong this population’s survival.

Venetoclax, an oral inhibitor of the antiapoptotic protein BCL2, is approved for all patients with CLL, both as monotherapy and in combination with an anti-CD20 monoclonal antibody, and data support its use after BTKi resistance has been detected. Some evidence in CLL cell lines supports use of the oral phosphoinositide 3-kinase (PI3K) inhibitors idelalisib and duvelisib in relapsed CLL with the BTK C481S mutation. Early response data with third-generation, noncovalent BTKis such as ARQ-531 and LOXO-305 suggest promise in this setting. In addition, for young and otherwise healthy patients who have progressed on both BTKi and venetoclax therapy, allogeneic hematopoietic stem cell transplantation could be considered.

In patients with heavily pretreated CLL, early clinical data support chimeric antigen receptor T-cell (CAR T) therapy, in which patients’ own T cells are extracted, engineered, and reinfused. A related immunotherapy, CAR-NK cell therapy, uses a similar process of retroviral vector insertion of an anti-CD19 CAR into donor natural killer (NK) cells before infusion into the patient. It shows promise in early data from patients with CLL, all of whom had previously been heavily treated with ibrutinib.

More research, more hope

Despite the significant advance that BTKis represent, BTKi resistance, with its attendant shortened survival, remains a clinical problem for patients with B-cell malignancies. BTKi resistance has been associated with several genetic and clinical risk factors, with mutations in BTK and PLCγ2 the most common and most thoroughly researched. “Ongoing clinical trials of third-generation noncovalent BTKis and cellular therapies, such as CAR T, provide much hope for these patients. ... Continued additional research is needed to further prolong the survival of patients with BTKi-resistant B-cell malignancies.”

Dr. Stephens has received research funding and has served on advisory boards for a variety of pharmaceutical and biotechnology companies. Dr. Byrd has received research funding and has consulted for a variety of pharmaceutical and biotechnology companies.

Watchful waiting sometimes best for asymptomatic basal cell carcinoma

For patients with asymptomatic nodular or superficial basal cell carcinoma (BCC) and a limited life expectancy, watchful waiting may be more suitable than active treatment, according to a study published in JAMA Dermatology.

“Patient preferences, treatment goals, and the option for proceeding with a watchful waiting approach should be discussed as part of personalized shared decision-making,” wrote Marieke van Winden, MD, MSc, of Radboud University Medical Center in Nijmegen, the Netherlands, and colleagues. “In patients with a limited life expectancy and asymptomatic low-risk tumors, the time to benefit from treatment might exceed life expectancy, and watchful waiting should be discussed as a potentially appropriate approach.”

Because little research has been undertaken on watchful waiting in patients with BCC, the expected tumor growth, progression, and chance of developing symptoms with this approach are poorly understood. Patients with limited life expectancy might not live long enough to develop BCC symptoms and may benefit more from watchful waiting than from active treatment, the study authors wrote.

This observational cohort study evaluated the reasons for choosing watchful waiting, along with the natural course of 280 BCCs in 89 patients (53% men; median age, 83 years) who chose this approach. Patients had one or more untreated BCCs for at least 3 months, and the median follow-up was 9 months. The researchers also examined the reasons for initiating later treatment.

Patient-related factors, including limited life expectancy, prioritization of comorbidities, and frailty, were the most important reasons for choosing watchful waiting in 83% of patients, followed by tumor-related factors in 55% of patients. Of the tumors, 47% increased in size. The estimated tumor diameter increase over 1 year was 4.46 mm for infiltrative/micronodular BCCs and 1.06 mm for nodular, superficial, or clinical BCCs. Tumor growth was not associated with initial tumor size or location.

The most common reasons to initiate active treatment were tumor burden, resolved reasons for watchful waiting, and reevaluation of patient-related factors.

“All patients should be followed up regularly to determine whether a watchful waiting approach is still suited and if patients still prefer watchful waiting to reconsider the consequences of refraining from treatment,” the authors wrote.

In an accompanying editorial, Mackenzie R. Wehner, MD, MPhil, of the University of Texas MD Anderson Cancer Center in Houston, said that, while the observational and retrospective design was a limitation of the study, this allowed the authors to observe patients avoiding or delaying treatment for BCC in real clinical practice.

The study “shows that few patients developed new symptoms, and few patients who decided to treat after a delay had more invasive interventions than originally anticipated, an encouraging result as we continue to study the option and hone the details of active surveillance in BCC,” Dr. Wehner wrote. “It is important to note that the authors did not perform actual active surveillance. This study did not prospectively enroll patients and see them in follow-up at set times, nor did it have prespecified end points for recommending treatment.”

“Before evidence-based active surveillance in BCC can become a viable option, prospective studies of active surveillance, with specified follow-up times and clear outcome measures, are needed,” Dr. Wehner wrote.

Dr. van Winden did not report any conflicts of interest.

In MS, baseline cortical lesions predict cognitive decline

Three or more cortical lesions at the time of multiple sclerosis (MS) diagnosis predict long-term cognitive decline, according to findings from a new analysis. The predictor had good accuracy and could help clinicians monitor and treat cognitive impairment as it develops, according to Stefano Ziccardi, PhD, a postdoctoral researcher at the University of Verona in Italy.

“The number of cortical lesions at MS diagnosis accurately discriminates between the presence or the absence of cognitive impairment after diagnosis of MS, and this should be considered a predictive marker of long-term cognitive impairment in these patients. Early cortical lesion evaluation should be conducted in each MS patient to anticipate the manifestation of cognitive problems to improve the monitoring of cognitive abilities, improve the diagnosis of cognitive impairment, enable prompt intervention as necessary,” said Dr. Ziccardi at the annual meeting of the European Committee for Treatment and Research in Multiple Sclerosis (ECTRIMS).

Cortical lesions are highly prevalent in MS, perhaps more so than white matter lesions, said Dr. Ziccardi. They are associated with clinical disability and lead to disease progression. “However, prognostic data about the role of early cortical lesions with reference to long-term cognitive impairment are still missing,” said Dr. Ziccardi.

That’s important because cognitive impairment is very common in MS, affecting between one-third and two-thirds of patients. It may appear early in the disease course and worsen over time, and it predicts worse clinical and neurological progression. And it presents a clinical challenge. “Clinicians struggle to predict the evolution of cognitive abilities over time,” said Dr. Ziccardi.

The findings drew praise from Iris-Katharina Penner, PhD, who comoderated the session. “I think the important point … is that the predictive value of cortical lesions is very high, because it indicates finally that we probably have a patient at risk for developing cognitive impairment in the future,” said Dr. Penner, who is a neuropsychologist and cognitive neuroscientist at Heinrich Heine University in Düsseldorf, Germany.

Clinicians often don’t pay enough attention to cognition and the complexities of MS, said Mark Gudesblatt, MD, who was asked to comment. “It’s just adding layers of complexity. We’re peeling back the onion and you realize it’s a really complicated disease. It’s not just white matter plaques, gray matter plaques, disconnection syndrome, wires cut, atrophy, ongoing inflammation, immune deficiency. All these diseases are fascinating. And we think we’re experts. But the fact is, we have much to learn,” said Dr. Gudesblatt, who is medical director of the Comprehensive MS Care Center at South Shore Neurologic Associates in Patchogue, New York.

The researchers analyzed data from 170 patients with MS and a disease duration of approximately 20 years. Among the study cohort, 62 patients were female, and the mean duration of disease was 19.2 years. Each patient had undergone a 1.5-T magnetic resonance imaging (MRI) scan to look for cortical lesions within 3 years of diagnosis, as well as periodic MRIs and neuropsychological exams, and all underwent a neuropsychological assessment at the end of the study, which included the Brief Repeatable Battery of Neuropsychological Tests (BRB-NT) and the Stroop Test.

A total of 41% of subjects had no cortical lesions according to their first MRI; 19% had 1-2 lesions, and 40% had 3 or more. At follow-up, 50% were cognitively normal (failed no tests), 25% had mild cognitive impairment (failed one or more tests), and 25% had severe cognitive impairment (failed three or more tests).

In the overall cohort, the median number of cortical lesions at baseline was 1 (interquartile range [IQR], 5.0). Among the 50% with normal cognitive function, the median was 0 (IQR, 2.5), while for the remaining 50% with cognitive impairment, the median was 3 (IQR, 7.0).

Those with 3 or more lesions had increased odds of cognitive impairment at follow-up (odds ratio, 3.70; P < .001), with an accuracy of 65% (95% confidence interval, 58%-72%), specificity of 75% (95% CI, 65%-84%), and a sensitivity of 55% (95% CI, 44%-66%). Three or more lesions discriminated between cognitive impairment and no impairment with an area under the curve of 0.67.
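
To see how these reported figures fit together, the following is a minimal arithmetic sketch (not from the study) showing that the reported overall accuracy is consistent with the reported sensitivity and specificity, given that roughly half of the cohort was cognitively impaired at follow-up; all inputs are the values quoted above.

```python
# Illustrative consistency check only, not part of the study analysis:
# overall accuracy = sensitivity * prevalence + specificity * (1 - prevalence)

sensitivity = 0.55  # probability the ">= 3 cortical lesions" cutoff flags an impaired patient
specificity = 0.75  # probability the cutoff clears an unimpaired patient
prevalence = 0.50   # about half the cohort was cognitively impaired at follow-up

accuracy = sensitivity * prevalence + specificity * (1 - prevalence)
print(f"Implied overall accuracy: {accuracy:.0%}")  # prints 65%, matching the reported value
```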

Individuals with no cognitive impairment had a median of 0 lesions (IQR, 2.5), those with mild cognitive impairment had a median of 2.0 (IQR, 6.0), and those with severe cognitive impairment had a median of 4.0 (IQR, 7.25).

In a multinomial regression model, 3 or more baseline cortical lesions were associated with a greater than threefold risk of severe cognitive impairment (OR, 3.33; P = .01).

Of subjects with 0 baseline lesions, 62% were cognitively normal at follow-up. In the 1-2 lesion group, 64% were normal. In the 3 or more group, 31% were cognitively normal (P < .001). In the 0 lesion group, 26% had mild cognitive impairment and 12% had severe cognitive impairment. In the 3 or more group, 28% had mild cognitive impairment, and 41% had severe cognitive impairment.

During the Q&A session following the talk, Dr. Ziccardi was asked if the group compared cortical lesions to other MRI correlates of cognitive impairment, such as gray matter volume or white matter integrity. He responded that the group is looking into those comparisons, and recently found that neither the number nor the volume of white matter lesions improved the accuracy of the predictive models based on the number of cortical lesions. The group is also looking into the applicability of gray matter volume.

Cortical lesions predict risk for secondary progressive MS

The number of cortical lesions at baseline may indicate a patient’s risk of developing secondary progressive multiple sclerosis (MS), according to new research. Cortical lesions also may be an early marker of future disability accumulation.

In the study, patients who had developed secondary progressive MS after 20 years of follow-up had approximately 7 cortical lesions at baseline. This number was significantly higher than the baseline number of cortical lesions in patients with clinically isolated syndrome (CIS), relapsing-remitting MS, or primary progressive MS at 20 years.

“Our study represented a clear indication that the assessment, presence, and high number of cortical lesions at diagnosis is one of the tools at the disposal of the neurologist for the early identification of patients with more serious disease course,” said Gian Marco Schiavi, MD, a neurology resident at the University of Verona, Italy, during the presentation of his research.

The study was presented October 14 at the annual meeting of the European Committee for Treatment and Research in Multiple Sclerosis (ECTRIMS).

Accumulation of disability

Previous research has indicated that cortical lesions play a role in the accumulation of disability in MS and in the conversion to secondary progressive MS. Other observations suggest that the number of cortical lesions after 30 years of follow-up explains more than 40% of the difference in disability among patients with secondary progressive MS.

The current investigators sought to understand whether cortical lesions at diagnosis could predict a patient’s risk for development of secondary progressive MS and risk for disability accumulation. They included 220 patients with MS and approximately 20 years of follow-up in their study.

At the time of diagnosis, all participants underwent 1.5-T MRI with double inversion recovery. Participants also presented for periodic MRI and clinical evaluations.

The researchers used analysis of variance to compare the baseline number of cortical lesions between patients with CIS, relapsing-remitting MS, secondary progressive MS, and primary progressive MS at 20 years. They also performed a multivariable regression analysis to predict patients’ final scores on the Expanded Disability Status Scale (EDSS). Variables included participants’ demographic, clinical, and radiological characteristics.

Lesions and disease progression

At baseline (the time of diagnosis), 162 patients had relapsing-remitting MS, 45 had CIS, and 12 had primary progressive MS. In all, 106 patients had no cortical lesions, 47 had 3 or fewer cortical lesions, and 67 had more than 3 cortical lesions.

At 20 years, 12 patients still had CIS, 152 had relapsing-remitting MS, and 44 had developed secondary progressive MS.

The mean number of cortical lesions at diagnosis was 6.6 in patients with secondary progressive MS at 20 years, which was significantly higher than the mean 1.3 cortical lesions in the other patients (P < .001).

In addition, post-hoc analysis showed that the median number of cortical lesions was significantly higher in patients with secondary progressive MS (6), compared with those with CIS (0; P < .001), relapsing-remitting MS (0; P < .001), and primary progressive MS (4.5; P = .013). Patients with primary progressive MS had a higher number of cortical lesions than patients with CIS and those with relapsing-remitting MS (P = .001).

The investigators also examined disability at 20 years. At that timepoint, mean EDSS score was 1.5 in patients with no cortical lesions, 3.0 in patients with 1 to 3 cortical lesions at baseline, and 6.0 in patients with more than 3 cortical lesions.

In a regression analysis, the number of cortical lesions and EDSS at diagnosis were the best predictors of long-term disability (P < .001). These factors explained about 57% of the variance in EDSS score after 20 years.

‘Important study’

“This important study supports that the presence of cortical lesions at the time of diagnosis is associated with long-term disability and transition to a secondary progressive disease course,” said Elias S. Sotirchos, MD, assistant professor of neurology at Johns Hopkins University, Baltimore. The study size and long duration of follow-up are important strengths of the findings, he added.

Still, further research is needed to validate cortical lesions as a biomarker in clinical practice. Aside from technical validation issues relating to the identification of cortical lesions, whether cortical lesion burden can be used to guide therapeutic decision-making in MS is not clear, said Dr. Sotirchos.

“Notably, these patients were diagnosed and enrolled in this study 20 years ago, prior to the availability of newer disease-modifying therapies [DMTs] that are more effective at preventing inflammatory disease activity in MS,” he said, referring to the participants in the current study.

While recent observational studies have suggested that early initiation of higher-efficacy DMTs may reduce long-term disability and the risk for transition to secondary progressive MS, the optimal approach to treatment in patients with a new diagnosis remains unclear, said Dr. Sotirchos.

Furthermore, it is unknown whether use of higher-efficacy DMTs may affect the risk of future disability in patients with high cortical lesion burden at baseline, said Dr. Sotirchos. “Or is it too late, especially considering the modest effects of DMTs in progressive patients and that cortical lesion burden was higher in patients that are progressive?”

One additional question to be addressed is how baseline cortical lesion burden adds to other factors that neurologists use in clinical practice to stratify patients’ risk of future disability, such as spinal cord involvement, motor or sphincter symptoms at onset, poor recovery from attacks, and white matter lesion burden, said Dr. Sotirchos.

The source of funding for this study was not reported. Dr. Schiavi and Dr. Sotirchos have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Lumbar epidural steroid jab lowers bone formation in older women

Article Type
Changed
Fri, 10/15/2021 - 12:42

Among postmenopausal women who received an epidural steroid injection (ESI) in the lumbar spine to treat back and leg pain arising from a compressed spinal nerve, levels of bone formation biomarkers decreased, and the decrease persisted for more than 12 weeks, results from a new study show.

In addition, serum cortisol levels decreased by 50% at week 1 after the ESI, indicating systemic absorption of the steroid.

“The extent and duration of these effects suggest that patients who receive multiple [ESIs in the lumbar spine] may be at particular risk for harmful skeletal consequences,” Shannon Clare reported in an oral presentation at the annual meeting of the American Society for Bone and Mineral Research.

Further studies are needed to clarify the relationship between these short-term changes in bone turnover and both bone loss and fracture risk in the burgeoning population treated with ESIs, added Ms. Clare, of the Hospital for Special Surgery, New York.

The researchers examined changes in serum levels of bone formation and resorption markers and other analytes in 24 women who received a lumbar ESI for radicular back pain and in 8 other women from the hospital population who served as control persons.

Among the women who received ESI, 1 week after the injection, serum levels of two bone formation biomarkers – total procollagen type 1 N-terminal peptide (P1NP) and osteocalcin – were about 27% lower than at baseline. The suppression persisted beyond 12 weeks.

Serum levels of the bone resorption biomarker C-terminal telopeptide of type I collagen (CTX) did not differ significantly after ESI.

“Our results are notable because we found that the duration of suppression of bone formation extended beyond 12 weeks, a far longer duration than seen previously with intra-articular injections” of glucocorticoids, said Ms. Clare and senior author Emily M. Stein, MD, director of research for the Metabolic Bone Service and an endocrinologist at the Hospital for Special Surgery and associate professor of medicine at Weill Cornell Medicine, both in New York.

The findings suggest that patients should not receive multiple doses within a 12-week period, they told this news organization in a joint email response.

Women are not typically screened for osteopenia or osteoporosis before ESI. However, “our results suggest that physicians should consider screening women for osteoporosis who receive ESI, particularly those who are treated with multiple doses,” said Ms. Clare and Dr. Stein. “Steroid exposure should be minimized as much as possible by having patients space injections as far as they can tolerate.”
 

Systemic absorption, negative impact on bone turnover markers

“The hypothesis that [ESIs] interfere with the vertebral osseous microenvironment and increase the risk of vertebral fractures has been supported with evidence in the literature,” Mohamad Bydon, MD, professor of neurosurgery, orthopedic surgery, and health services research at the Mayo Clinic, Rochester, Minn., said in an interview.

Prior studies have demonstrated a decrease in bone mineral density (BMD) and an increase in vertebral fractures following ESI, added Dr. Bydon, senior author of a 2018 review of the effect of ESI on BMD and vertebral fracture risk that was published in Pain Medicine. He was not involved with the current study.

“The article by Clare et al. provides evidence on the systemic absorption of glucocorticoids by demonstrating a drop in serum cortisol following ESI,” he noted. “The measurement of bone metabolism biomarkers offers molecular confirmation of clinical and radiological observations of previous studies” showing that ESI affects the vertebrae.
 

 

 

More than 9 million ESIs each year

Each year, more than 9 million ESIs are administered to patients in the United States to relieve radicular back and leg pain that may be caused by a herniated disc or spinal stenosis (a gradual narrowing of the open spaces in the spinal column, which is common in older adults), the researchers explained.

Some patients experience sufficient pain relief with ESIs. Others may not be eligible for surgery and may receive multiple ESIs annually for many years because they provide pain relief.

It is well established that oral and intravenous glucocorticoids profoundly suppress bone formation and transiently increase bone resorption, causing substantial bone loss and increased fracture risk within 3 months of administration, Ms. Clare explained in the session.

Long-term use of high-dose inhaled glucocorticoids has been associated with bone loss and fractures. However, the effect of ESIs on bone has been less well studied.

The researchers hypothesized that ESIs are systemically absorbed and cause suppression of bone formation without a compensatory decrease in bone resorption.

They enrolled 24 patients who had undergone lumbar ESIs and 8 control patients. The mean age of the patients in the two groups was 63 years and 68 years, respectively. Most patients were White (88% and 100%, respectively). The mean body mass index was 27 kg/m2 and 28 kg/m2, respectively. On average, the patients had entered menopause 12 and 16 years earlier, respectively.

In the group that received steroid injections, almost two-thirds (15 patients, 63%) received triamcinolone. The rest received dexamethasone (six patients, 25%) or betamethasone (three patients, 12%) at doses that were equivalent to 80 mg triamcinolone.
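For readers who want to check the arithmetic behind such conversions, the short sketch below uses commonly cited anti-inflammatory equivalence factors; the factors and example doses are illustrative assumptions, not values reported by the investigators.

```python
# Minimal sketch of converting glucocorticoid doses to triamcinolone equivalents.
# The equivalence factors are commonly cited anti-inflammatory potency values
# (mg of each drug comparable to 20 mg hydrocortisone); they are assumptions
# for illustration, not figures reported by the study.

EQUIVALENT_DOSE_MG = {
    "triamcinolone": 4.0,
    "dexamethasone": 0.75,
    "betamethasone": 0.6,
}

def triamcinolone_equivalent(drug: str, dose_mg: float) -> float:
    """Convert a dose of `drug` to the triamcinolone dose of similar anti-inflammatory potency."""
    return dose_mg * EQUIVALENT_DOSE_MG["triamcinolone"] / EQUIVALENT_DOSE_MG[drug]

# Hypothetical doses that work out to roughly 80 mg of triamcinolone
print(triamcinolone_equivalent("dexamethasone", 15.0))  # ~80.0
print(triamcinolone_equivalent("betamethasone", 12.0))  # ~80.0
```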

The patients’ baseline serum levels of 25-hydroxy vitamin D, parathyroid hormone, cortisol, P1NP, osteocalcin, and CTX were within the reference ranges and were similar in the two groups.

The researchers also determined serum levels of cortisol (to assess suppression of endogenous glucocorticoids), osteocalcin, P1NP, and CTX in the patients and control persons at 1, 4, 12, 26, and 52 weeks after patients had received the ESI.

The researchers acknowledged that the small sample is a study limitation. In addition, the first serum samples were taken 1 week after the injection, and so any earlier changes in analyte levels were not captured. The patients also received different types of steroids, although the doses were similar when converted to triamcinolone equivalents.

The study was supported by a Spine Service grant from the Hospital for Special Surgery. The authors disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


QI reduces daily labs and promotes sleep-friendly lab timing

Article Type
Changed
Fri, 10/15/2021 - 12:52

Background: Daily labs are often unnecessary on clinically stable inpatients. Additionally, daily labs are frequently drawn very early in the morning, resulting in sleep disruptions. No prior studies have attempted an EHR-based intervention to simultaneously improve both frequency and timing of labs.

Dr. Sean M. Lockwood


Study design: Quality improvement project.

Setting: Resident and hospitalist services at a single academic medical center.

Synopsis: After surveying providers about lab-ordering preferences, an EHR shortcut and visual reminder were built to facilitate labs being ordered every 48 hours at 6 a.m. (rather than daily at 4 a.m.). Results included 26.3% fewer routine lab draws per patient-day per week and a significant increase in sleep-friendly lab order utilization per encounter per week on both resident services (intercept, 1.03; SE, 0.29; P < .001) and hospitalist services (intercept, 1.17; SE, 0.50; P = .02).
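A rough sketch of the scheduling rule (labs every 48 hours, drawn at 6 a.m.) is shown below; the function, its name, and its defaults are illustrative assumptions, not the EHR tool evaluated in the study.

```python
from datetime import datetime, timedelta

# Illustrative sketch of the ordering rule described above: routine labs every
# 48 hours, drawn at 6 a.m. rather than daily at 4 a.m. The function and its
# defaults are hypothetical and are not part of the studied EHR tool.

def next_routine_lab(last_draw: datetime, interval_hours: int = 48, draw_hour: int = 6) -> datetime:
    """Return the next draw time: at least `interval_hours` after the last draw,
    moved forward to the next occurrence of `draw_hour`."""
    earliest = last_draw + timedelta(hours=interval_hours)
    candidate = earliest.replace(hour=draw_hour, minute=0, second=0, microsecond=0)
    if candidate < earliest:
        candidate += timedelta(days=1)
    return candidate

print(next_routine_lab(datetime(2021, 10, 1, 4, 0)))  # 2021-10-03 06:00:00
```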

Bottom line: An intervention consisting of physician education and an EHR tool reduced daily lab frequency and optimized morning lab timing to improve sleep.

Citation: Tapaskar N et al. Evaluation of the order SMARTT: An initiative to reduce phlebotomy and improve sleep-friendly labs on general medicine services. J Hosp Med. 2020;15:479-82.

Dr. Lockwood is a hospitalist and chief of quality, performance, and patient safety at the Lexington (Ky.) VA Health Care System.


Mixing COVID vaccine boosters may be better option: Study

Article Type
Changed
Mon, 10/18/2021 - 14:42

A new U.S. government study shows it isn’t risky and may even be a good idea to mix, rather than match, COVID-19 vaccines when getting a booster dose.

The study also shows that mixing different kinds of vaccines appears to spur the body to make higher levels of virus-blocking antibodies than boosting with another dose of the vaccine the person had already received.

If regulators endorse the study findings, it should make getting a COVID-19 booster as easy as getting a yearly influenza vaccine.

“Currently when you go to do your flu shot nobody asks you what kind you had last year. Nobody cares what you had last year. And we were hoping that that was the same — that we would be able to boost regardless of what you had [previously],” said the study’s senior author, John Beigel, MD, who is associate director for clinical research in the division of microbiology and infectious diseases at the National Institutes of Health.

“But we needed to have the data,” he said.

Studies have suggested that higher antibody levels translate into better protection against disease, though the exact level that confers protection is not yet known.

“The antibody responses are so much higher [with mix and match], it’s really impressive,” said William Schaffner, MD, an infectious disease specialist at Vanderbilt University in Nashville, who was not involved in the study.

Dr. Schaffner said if the U.S. Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention (CDC) sign off on the approach, he would especially recommend that people who got the Johnson & Johnson vaccine follow up with a dose of an mRNA vaccine from Pfizer or Moderna.

“It is a broader stimulation of the immune system, and I think that broader stimulation is advantageous,” he said.

Minimal side effects

The preprint study was posted on medRxiv late Oct. 13, ahead of peer review, just before a slate of meetings of the vaccine experts who advise the FDA and CDC.

These experts are tasked with trying to figure out whether additional shots of Moderna and Johnson & Johnson vaccines are safe and effective for boosting immunity against COVID-19.

The FDA’s panel is the Vaccines and Related Biological Products Advisory Committee (VRBPAC), and the CDC’s panel is the Advisory Committee on Immunization Practices (ACIP). 

During the pandemic, they have been meeting almost in lock step to tackle important vaccine-related questions.

“We got this data out because we knew VRBPAC was coming and we knew ACIP was going to grapple with these issues,” Dr. Beigel said.

He noted that these are just the first results. The study will continue for a year, and the researchers aim to deeply characterize the breadth and depth of the immune response to all nine of the different vaccine combinations included in the study.

The study included 458 participants at 10 study sites around the country who had been fully vaccinated with one of the three COVID-19 vaccines authorized for use in the United States: Moderna, Johnson & Johnson, or Pfizer-BioNTech. 

About 150 study participants were recruited from each group. Everyone in the study had finished their primary series at least 12 weeks before starting the study. None had a prior SARS-CoV-2 infection.

About 50 participants from each vaccine group were randomly assigned to get a booster dose of either the same vaccine as the one they had already received or a different vaccine, creating nine possible combinations of shots.

About half of study participants reported mild side effects — including pain at the injection site, fatigue, headache, and muscle aches.

Two study participants had serious medical problems during the study, but these events were judged to be unrelated to vaccination. One study participant experienced kidney failure after their muscles broke down following a fall. The other experienced cholecystitis, or an inflamed gallbladder.

Up to 1 month after the booster shots, no other serious adverse events were seen.

The study didn’t look at whether people got COVID-19, so it’s not possible to say that they were better protected against disease after their boosters.

 

 

Increase in antibodies

But all the groups saw substantial increases in their antibody levels, which is thought to indicate that they were better protected.

Overall, groups that received the same vaccine as their primary series saw 4- to 20-fold increases in their antibody levels. Groups that received a different vaccine from the one in their primary series saw 6- to 76-fold increases.
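The fold increases reported here are simple ratios of post-boost to pre-boost antibody levels; the brief sketch below shows the arithmetic using made-up numbers that are not data from the study.

```python
# Minimal sketch of the fold-increase arithmetic: the post-boost antibody level
# divided by the pre-boost level. The numbers are invented for illustration and
# are not data from the study.

def fold_increase(pre_level: float, post_level: float) -> float:
    """Fold change in antibody level after a booster dose."""
    if pre_level <= 0:
        raise ValueError("pre-boost level must be positive")
    return post_level / pre_level

print(fold_increase(pre_level=50, post_level=1000))  # 20.0
print(fold_increase(pre_level=50, post_level=3800))  # 76.0
```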

People who had originally gotten a Johnson & Johnson vaccine saw far bigger increases in antibodies, and were more likely to see a protective rise in antibodies if they got a second dose of an mRNA vaccine.

Dr. Schaffner noted that European countries had already been mixing the vaccine doses this way, giving people who had received the AstraZeneca vaccine, which is similar to the Johnson & Johnson shot, another dose of an mRNA vaccine.

German Chancellor Angela Merkel received a Moderna vaccine for her second dose after an initial shot of the Oxford-AstraZeneca vaccines, for example.

No safety signals related to mixing vaccines have been seen in countries that routinely use the approach for the initial series.

A version of this article first appeared on Medscape.com.


AGA Clinical Practice Update: Expert review on GI perforations

Article Type
Changed
Mon, 10/25/2021 - 10:30

A clinical practice update expert review from the American Gastroenterological Association gives advice on management of endoscopic perforations in the gastrointestinal tract, including esophageal, gastric, duodenal and periampullary, and colon perforation.

Dr. Jeffrey H. Lee

There are various techniques for dealing with perforations, including through-the-scope clips (TTSCs), over-the-scope clips (OTSCs), self-expanding metal stents (SEMS), and endoscopic suturing. Newer methods include biological glue and esophageal vacuum therapy. These techniques have been the subject of various retrospective analyses, but few prospective studies have examined their safety and efficacy.

In the expert review, published in Clinical Gastroenterology and Hepatology, authors led by Jeffrey H. Lee, MD, MPH, AGAF, of the department of gastroenterology at the University of Texas MD Anderson Cancer Center, Houston, emphasized that gastroenterologists should have a perforation protocol in place and practice procedures that will be used to address perforations. Endoscopists should also recognize their own limits and know when a patient should be sent to experienced, high-volume centers for further care.

In the event of a perforation, the entire team should be notified immediately, and carbon dioxide insufflation should be used at a low flow setting. The endoscopist should clean up luminal material to reduce the chance of peritoneal contamination, and then treat with an antibiotic regimen that counters gram-negative and anaerobic bacteria.
 

Esophageal perforation

Esophageal perforations most commonly occur during dilation of strictures, endoscopic mucosal resection (EMR), and endoscopic submucosal dissection (ESD). Perforations of the mucosal flap may happen during so-called third-space endoscopy techniques like peroral endoscopic myotomy (POEM). Small perforations can be readily addressed with TTSCs. Larger perforations call for some combination of TTSCs, endoscopic suturing, fibrin glue injection, or esophageal stenting, though the latter is discouraged because of the potential for erosion.

A more concerning complication of POEM is delayed barrier failure, which can cause leaks, mediastinitis, or peritonitis. These complications have been estimated to occur in 0.2%-1.1% of cases.

In the event of an esophageal perforation, the area should be kept clean by suctioning, or by altering patient position if required. Perforations 1-2 cm in size can be closed using OTSCs. Excessive bleeding or larger tears can be addressed using a fully covered SEMS.

Leaks that occur in the days after the procedure should be closed using TTSCs, OTSCs, or endoscopic suturing, followed by placement of a fully covered stent. An esophageal fistula should be addressed with a tightly fitting, fully covered stent.

Endoscopic vacuum therapy is a newer technique for addressing large or persistent esophageal perforations; one review reported a 96% success rate.
 

Gastric perforations

Gastric perforations often result from peptic ulcer disease or caustic ingestion, and perforation is also a risk during EMR and ESD (0.4%-0.7% intraprocedural risk). The proximal gastric wall is not as thick as the gastric antrum, so proximal endoscopic resections require extra care. Lengthy procedures should be done under anesthesia. Ongoing gaseous insufflation after a perforation may worsen the problem because of heightened intraperitoneal pressure. OTSCs may be a better choice than TTSCs for 1-3 cm perforations, while an endoloop/TTSC combination can be used for larger ones.

 

 

Duodenal and periampullary perforations

Duodenal and periampullary perforations occur during duodenal stricture dilation, EMR, ESD, endoscopic ultrasound, and endoscopic retrograde cholangiopancreatography (ERCP). The thin duodenal wall makes it more susceptible to perforation than the esophagus, stomach, or colon.

Closing a duodenal perforation can be difficult. Type 1 perforations typically present with sudden bleeding and lumen deflation and often require surgical intervention, although some recent reports have suggested success with TTSCs, OTSCs, band ligation, and endoloops. Type 2 perforations are less obvious, and the endoscopist must examine the gas pattern on fluoroscopy beneath the liver or in the area of the right kidney. Retroperitoneal air following ERCP, if asymptomatic, doesn’t necessarily require intervention.

The challenges presented by the duodenum mean that, for large duodenal polyps, EMR should only be done by experienced endoscopists who are skilled at mucosal closure, and only experts should attempt ESD. Proteolytic enzymes from the pancreas can also pool in the duodenum, which can degrade muscle tissue and lead to delayed perforations. TTSC, OTSC, endosuturing, polymer gels or sheets, and TTSC combined with endoloop cinching have been used to close resection-associated perforations.
 

Colon perforation

Colon perforation may be caused by diverticulitis, inflammatory bowel disease, or occasionally colonic obstruction. Iatrogenic causes are more common and include endoscopic resection, hot forceps biopsy, dilation of strictures resulting from radiation or Crohn’s disease, colonic stenting, and advancement of the colonoscope across angulations or into diverticula without straightening the endoscope.

Large perforations are usually immediately noticeable and should be treated surgically, as should perforations accompanied by hemodynamic instability and delayed perforations with peritoneal signs.

Endoscopic closure should be attempted when the perforation site is clean, and lower rectal perforations can generally be repaired with TTSC, OTSC, or endoscopic suturing. In the cecum, or in a tortuous or unclean colon, it may be difficult or dangerous to remove the colonoscope and insert an OTSC, and endoscopic suturing may not be possible, making TTSC the only option for right colon perforations. The X-Tack Endoscopic HeliX Tacking System is a recently introduced through-the-scope technology that places suture-tethered tacks into the tissue surrounding the perforation and cinches it together. In principle, the system can close large or irregular colonic and small bowel perforations using gastroscopes and colonoscopes, but no human studies have yet been published.
 

Conclusion

This update was a collaborative effort by four endoscopists who felt a review of perforations was timely because these events can be serious and challenging to manage. The evolution of endoscopic techniques over the past few years, however, has made closure of spontaneous and iatrogenic perforations much less fear-provoking, and the authors aimed to summarize approaches to a variety of such situations to guide practitioners who may encounter them.

“Although perforation is a serious event, with novel endoscopic techniques and tools, the endoscopist should no longer be paralyzed when it occurs,” the authors concluded.

Some authors reported relationships, such as consulting for or royalties from, device companies such as Medtronic and Boston Scientific. The remaining authors disclosed no conflicts.

This article was updated Oct. 25, 2021.





Therapeutic homework adherence improves tics in Tourette’s disorder

Article Type
Changed
Fri, 10/15/2021 - 10:52

Homework adherence between behavior therapy sessions is a significant predictor of therapeutic improvement in patients with Tourette’s disorder (TD), a study of 119 youth and adults suggests.

The assigning of “homework” to be completed between sessions – often used in cognitive-behavioral therapy – has been shown to reinforce learning but has not been well studied in TD.

“Understanding the relationship between homework adherence and therapeutic improvement from behavior therapy for TD may offer new insights for enhancing tic severity reductions achieved during this evidence-based treatment,” wrote Joey Ka-Yee Essoe, PhD, of the department of psychiatry and behavioral sciences at Johns Hopkins University, Baltimore, and colleagues.

To conduct the study, published in Behaviour Research and Therapy, the researchers recruited 70 youth and 49 adults with TD, ranging in age from 9 to 67 years, who underwent treatment at a single center. The average age was 21 years, and 80 participants were male. Treatment response was based on the Clinical Global Impressions of Improvement scale (CGI-I). Participants were assessed at baseline for tic severity and received eight sessions over 10 weeks. During those sessions, they were taught to perform a competing response to inhibit the expression of a tic when the tic or urge was detected.

Participants received homework at each weekly therapy session; most assignments consisted of three to four practice sessions of about 30 minutes per week. Therapists reviewed the homework at the following session and adapted it as needed to improve tic reduction skills.

After eight sessions of behavior therapy, greater overall homework adherence significantly predicted reduced tic severity and therapeutic improvement. However, early homework adherence predicted therapeutic improvement in youth, while late homework adherence predicted it in adults.

Overall, homework adherence significantly predicted tic reductions, compared with baseline (P = .037), based on the clinician-rated Yale Global Tic Severity Scale.

However, homework adherence dipped midway through treatment in youth and showed a linear decline in adults, the researchers noted.

Among youth, baseline predictors of early homework adherence included lower levels of hyperactivity/impulsivity and caregiver strain. Among adults, baseline predictors of early homework adherence included lower anger scores, less social disability, and greater work disability.

The study findings were limited by several factors, including the absence of complete data on baseline predictors of homework adherence, reliance on a single measure of tic severity and improvement, and reliance on therapists’ reports of homework adherence, the researchers noted.

Future research should include objective measures of homework adherence, such as time-stamped videos, and different strategies may be needed for youth vs. adults, they added.

“Strategies that optimize homework adherence may enhance the efficacy of behavioral therapy, lead to greater tic severity reductions, and higher treatment response rates,” Dr. Essoe and colleagues wrote.

The study was supported by the Tourette Association of America, the National Institute of Mental Health, the American Academy of Neurology, and the American Psychological Foundation.




Melatonin improves sleep in MS

Article Type
Changed
Fri, 10/15/2021 - 14:23

Melatonin improved sleep time and sleep efficiency in patients with multiple sclerosis (MS) who also had sleep disturbance, according to a new pilot study.

Dr. Wan-Yu Hsu

The study included only 30 patients, but the findings suggest that melatonin could potentially help patients with MS who have sleep issues, according to Wan-Yu Hsu, PhD, who presented the study at the annual meeting of the European Committee for Treatment and Research in Multiple Sclerosis (ECTRIMS).

There is no optimal management of sleep issues for these patients, and objective studies of sleep in patients with MS are scarce, said Dr. Hsu, who is an associate specialist in the department of neurology at the University of California, San Francisco. She worked with Riley Bove, MD, who is an associate professor of neurology at UCSF Weill Institute for Neurosciences.

“Melatonin use was associated with improvement in sleep quality and sleep disturbance in MS patients, although there was no significant change in other outcomes, like daytime sleepiness, mood, and walking ability,” Dr. Hsu said in an interview.

Melatonin is inexpensive and readily available over the counter, but it’s too soon to begin recommending it to MS patients experiencing sleep problems, according to Dr. Hsu. “It’s a good start that we’re seeing some effects here with this relatively small group of people. Larger studies are needed to unravel the complex relationship between MS and sleep disturbances, as well as develop successful interventions. But for now, since melatonin is an over-the-counter, low-cost supplement, many patients are trying it already.”

Melatonin regulates the sleep-wake cycle, and previous research has shown a decrease in melatonin serum levels as a result of corticosteroid administration. Other work has suggested that the decline of melatonin secretion in MS may reflect progressive failure of the pineal gland in the pathogenesis of MS. “The cause of sleep problems can be lesions and neural damage to brain structures involved in sleep, or symptoms that indirectly disrupt sleep,” she said.

Indeed, sleep issues in MS are common and wide-ranging, according to Mark Gudesblatt, MD, who was asked to comment on the study. His group previously reported that 65% of people with MS who reported fatigue had undiagnosed obstructive sleep apnea. He also pointed out that disruption of the neural network also disrupts sleep. “That is not only sleep-disordered breathing, that’s sleep onset, REM latency, and sleep efficiency,” said Dr. Gudesblatt, who is medical director of the Comprehensive MS Care Center at South Shore Neurologic Associates in Patchogue, N.Y.

Dr. Gudesblatt cautioned that melatonin, as a dietary supplement, is unregulated. The potency listed on the package may not be accurate and also may not be the correct dose for the patient. “It’s fraught with problems, but ultimately it’s relatively safe,” said Dr. Gudesblatt.

The trial had a double-blind, placebo-controlled, crossover design. Participants had a Pittsburgh Sleep Quality Index (PSQI) score of 5 or more, or an Insomnia Severity Index (ISI) score higher than 14, at baseline. Other baseline assessments included patient-reported outcomes for sleep disturbances, sleep quality, daytime sleepiness, fatigue, walking ability, and mood. Half of the participants received melatonin for the first 2 weeks and then switched to placebo. The other half started with placebo and moved over to melatonin at the beginning of week 3.

Participants started at 0.5 mg of melatonin and were stepped up to 3.0 mg after 3 days if they didn’t feel it was working, both when taking melatonin and when taking placebo. Of the 30 patients, 24 stepped up to 3.0 mg when they were receiving melatonin.*

During the second and fourth weeks, participants wore an actigraph watch to measure their physical and sleep activities, and then repeated the patient-reported outcome measures at the end of weeks 2 and 4. Melatonin improved average sleep time (6.96 vs. 6.67 hours; P = .03) as measured by the actigraph watch. Sleep efficiency was also nominally improved (84.7% vs. 83.2%), though the result was not statistically significant (P = .07). Other trends toward statistical significance included improvements in ISI (–3.5 vs. –2.4; P = .07), change in PSQI component 1 (–0.03 vs. 0.0; P = .07), and change in the NeuroQoL-Fatigue score (–4.7 vs. –2.4; P = .06).

Dr. Hsu hopes to conduct larger studies to examine how disease-modifying therapies might affect these results.

The study was funded by the National Multiple Sclerosis Society. Dr. Hsu and Dr. Gudesblatt have no relevant financial disclosures.

*This article was updated on Oct. 15.


