FDA class I recall for some Abiomed Impella heart pumps

Abiomed has recalled 466 of its Impella 5.5 with SmartAssist heart pumps after receiving customer complaints about purge fluid leaking from the purge sidearm of the pump.

“If a purge leak occurs, the system will experience low purge pressures, prompting alarms and requiring evaluation,” the U.S. Food and Drug Administration says in an advisory posted on its website.



“If the leak issue is not resolved, persistent low purge pressure and purge flow may lead to pump stop and loss of therapy. In patients who are critical, failure of the pump’s support can lead to further deterioration and worsening of their already critical condition and may even lead to serious injury or death,” the FDA says.

The FDA has identified this as a class I recall, the most serious type, because of the potential for serious injury or death.

To date, Abiomed says it has received 179 complaints; there have been three injuries and no deaths related to this problem.

The Impella 5.5 with SmartAssist System is used for up to 14 days to support the ventricles in the setting of ongoing cardiogenic shock that occurs less than 48 hours after acute myocardial infarction, open-heart surgery, or when the heart is not functioning well owing to cardiomyopathy.

All the devices that are being recalled were distributed from September 2021 to March 2023. Detailed product information is available on the FDA website.

Abiomed has sent an urgent medical device recall letter to customers asking them to review their inventory to check for any recalled product and to contact Abiomed customer support to coordinate return of the product.

Customers are advised not to use affected products unless no other product is available. The letter includes “best practices” for situations in which no other option is available and the device must be used until a replacement is available.

Customers with questions about this recall should contact Abiomed’s clinical support center at 1-800-422-8666.

A version of this article was first published on Medscape.com.

Once-weekly growth hormone somapacitan approved for children 

The once-weekly growth hormone analog somapacitan (Sogroya, Novo Nordisk) has been given the green light in both Europe and the United States for the treatment of children and adolescents with growth failure due to growth hormone deficiency.

On May 26, the European Medicines Agency’s Committee for Medicinal Products for Human Use adopted a positive opinion, recommending the product for replacement of endogenous growth hormone in children aged 3 years and older.

That decision followed the Food and Drug Administration’s approval in April of the new indication for somapacitan injection in 5 mg, 10 mg, or 15 mg doses for children aged 2.5 years and older. The FDA approved the treatment for adults with growth hormone deficiency in September 2020.

Growth hormone deficiency is estimated to affect between 1 in 3,500 and 1 in 10,000 children. If left untreated, the condition can lead to shortened stature, reduced bone mineral density, and delayed appearance of teeth.

The European and American regulatory decisions were based on data from the phase 3 multinational REAL4 trial, published in the Journal of Clinical Endocrinology & Metabolism, in 200 prepubertal children with growth hormone deficiency randomly assigned 2:1 to weekly subcutaneous somapacitan or daily somatropin. At 52 weeks, height velocity was 11.2 cm/year with the once-weekly drug, compared with 11.7 cm/year with daily somatropin, a nonsignificant difference.

There were no major differences between the drugs in safety or tolerability. Adverse reactions in the REAL4 study that occurred in more than 5% of patients included nasopharyngitis, headache, pyrexia, extremity pain, and injection site reactions. A 3-year extension trial is ongoing.

The European Commission is expected to make a final decision in the coming months, and, if approved, somapacitan will be available in some European countries beginning in late 2023.

A version of this article originally appeared on Medscape.com.

EHR nudges a bust for boosting guideline-directed meds in acute HF: PROMPT-AHF

A system of personalized alerts via an electronic health record (EHR) network failed to boost discharge prescriptions for guideline-directed medical therapy (GDMT) for patients hospitalized with heart failure (HF) with reduced ejection fraction in a randomized trial conducted at several centers in the same health care system.

The results of the PROMPT-AHF trial, which assigned such patients to have or not have the GDMT-promoting physician nudges as part of their in-hospital management, were “not entirely surprising,” Tariq Ahmad, MD, MPH, of Yale University, New Haven, Conn., said in an interview.

“We have created an environment in the hospital that makes care quite fractured for patients with heart failure,” he said. “They are cared for by many different clinicians, which leads to well-known behaviors such as diffusion of responsibility.”

Moreover, many clinicians focus on stabilizing patients “rather than starting them on a comprehensive set of medications, which most think should be done after discharge,” Dr. Ahmad added.

“Importantly, there has been a logarithmic increase in alerts while patients are hospitalized that has caused clinician burnout and is leading to even very important alerts being ignored.”

Likely as a result, the trial saw no significant difference between the alert and no-alert groups in how often the number of GDMT prescriptions rose by at least one drug class, whether beta blockers, renin-angiotensin system inhibitors, mineralocorticoid receptor antagonists, or SGLT2 inhibitors. That happened for 34% of patients in both groups, reported Dr. Ahmad at the Heart Failure Association of the European Society of Cardiology (HFA-ESC) 2023 sessions.

Nor was there a difference in the secondary endpoint of increased number of GDMT meds or escalated dosage of prescribed GDMT drugs.
 

GDMT ‘uncommon’ in AHF

In an earlier trial in outpatients with chronic HF, conducted by many of the same researchers, use of a targeted EHR-based alert system was associated with significantly higher rates of GDMT prescriptions 30 days after discharge, compared with usual care, Dr. Ahmad observed in his presentation.

Because GDMT is similarly “uncommon” among patients hospitalized with acute HF, the team designed the current trial, a test of the hypothesis that a similar system of nudges would lead to higher rates of prescriptions of the four core GDMT drug classes.

The study enrolled 920 adults with acute HF, an EF of 40% or lower (their median was 28%), and NT-proBNP levels higher than 500 pg/mL. The patients received IV diuretics for the first 24 in-hospital hours and were not taking medications from any of the four core HF drug classes. Their mean age was 74, 36% were women, and 25% were Black.

Physicians of patients who were randomly assigned to the intervention received the alerts as they entered information that involved ejection fraction, blood pressure, potassium levels, heart rate, glomerular filtration rate, and meds they were currently or should be taking, “along with an order set that made ordering those medications very easy,” Dr. Ahmad said.

“There was absolutely no evidence that the alert made any difference. There were zero patients on all four classes of GDMT at baseline, and at the time of discharge, only 11.2% of patients were on all four pillars – essentially, one in nine patients,” Dr. Ahmad said. Nor were there any subgroup differences in age, sex, race, ejection fraction, type of health insurance, or whether care was provided by a cardiologist or noncardiologist physician.

The study was limited by having been conducted within a single health care network using only the Epic EHR system. The alerts did not go exclusively to cardiologists, and patient preferences were not considered in the analysis. Also, the study’s alerts represented only some of the many that were received by the clinicians during the course of the trial.
 

 

 

Better incentives needed

“We believe this shows that refinement of the nudges is needed, as well as changes to clinician incentives to overcome barriers to implementation of GDMT during hospitalizations for AHF,” Dr. Ahmad said.

Responding to a postpresentation question on whether the postdischarge phase might be a more effective time to intervene with nudges, Dr. Ahmad observed that many clinicians who care for patients in the hospital assume that someone else will have the patient receive appropriate meds after discharge. “But we know that things that are started in the hospital tend to stick better.

“I do think that a lot of the clinicians were thinking, ‘I’m just going to get this patient out and someone in the outside will get them on GDMT,’ ” he said.

In the United States there are many incentives to reduce hospital length of stay and to expedite discharge so more beds are available for incoming patients, Dr. Ahmad observed. “I think it’s a combination of these kinds of perverse incentives that are not allowing us to get patients on appropriate GDMT during hospitalization.”

Furthermore, Dr. Ahmad told this news organization, “additions to the EHR should be evaluated in an evidence-based manner. However, the opposite has occurred, with an unregulated data tsunami crushing clinicians, which has been bad both for the clinicians and for patients.”

The study was funded by AstraZeneca. Dr. Ahmad discloses receiving research funding from and consulting for AstraZeneca; and receiving research funding from Boehringer Ingelheim, Cytokinetics, and Relypsa. Three other coauthors are employees of AstraZeneca.

A version of this article first appeared on Medscape.com.

Game-changing Alzheimer’s research: The latest on biomarkers

The field of neurodegenerative dementias, particularly Alzheimer’s disease (AD), has been revolutionized by the development of imaging and cerebrospinal fluid biomarkers and is on the brink of a new development: emerging plasma biomarkers. Research now recognizes the relationship between the cognitive-behavioral syndromic diagnosis (that is, the illness) and the etiologic diagnosis (the disease) – and the need to consider each separately when developing a diagnostic formulation. The National Institute on Aging and Alzheimer’s Association Research Framework uses the amyloid, tau, and neurodegeneration system to define AD biologically in living patients. Here is an overview of the framework, which requires biomarker evidence of amyloid plaques (amyloid positivity) and neurofibrillary tangles (tau positivity), with evidence of neurodegeneration (neurodegeneration positivity) to support the diagnosis.

The diagnostic approach for symptomatic patients

The differential diagnosis in symptomatic patients with mild cognitive impairment (MCI), mild behavioral impairment, or dementia is broad and includes multiple neurodegenerative diseases (for example, AD, frontotemporal lobar degeneration, dementia with Lewy bodies, argyrophilic grain disease, hippocampal sclerosis); vascular ischemic brain injury (for example, stroke); tumors; infectious, inflammatory, paraneoplastic, or demyelinating diseases; trauma; hydrocephalus; toxic/metabolic insults; and other rare diseases. The patient’s clinical syndrome narrows the differential diagnosis.

Once the clinician has a prioritized differential diagnosis of the brain disease or condition that is probably causing or contributing to the patient’s signs and symptoms, they can then select appropriate assessments and tests, typically starting with a laboratory panel and brain MRI. Strong evidence backed by practice recommendations also supports the use of fluorodeoxyglucose PET as a marker of functional brain abnormalities associated with dementia. Although molecular biomarkers are typically considered at a later stage of the clinical workup, the anticipated future availability of plasma biomarkers will probably change the timing of molecular biomarker assessment in patients with suspected cognitive impairment owing to AD.
 

Molecular PET biomarkers

Three PET tracers approved by the U.S. Food and Drug Administration for the detection of cerebral amyloid plaques have high sensitivity (89%-98%) and specificity (88%-100%), compared with autopsy, the gold standard diagnostic tool. However, these scans are costly and are not reimbursed by Medicare and Medicaid. Because all amyloid PET scans are covered by the Veterans Administration, this test is more readily accessible for patients receiving VA benefits.

The appropriate-use criteria developed by the Amyloid Imaging Task Force recommend amyloid PET for patients with persistent or progressive MCI or dementia. In such patients, a negative amyloid PET scan would strongly weigh against AD, supporting a differential diagnosis of other etiologies. Although a positive amyloid PET scan in patients with MCI or dementia indicates the presence of amyloid plaques, it does not necessarily confirm AD as the cause. Cerebral amyloid plaques may coexist with other pathologies and increase with age, even in cognitively normal individuals.

The IDEAS study looked at the clinical utility of amyloid PET in a real-world dementia specialist setting. In the study, dementia subspecialists documented their presumed etiologic diagnosis (and level of confidence) before and after amyloid PET. Of the 11,409 patients who completed the study, the etiologic diagnosis changed from AD to non-AD in just over 25% of cases and from non-AD to AD in 10.5%. Clinical management changed in about 60% of patients with MCI and 63.5% of patients with dementia.

In May 2020, the FDA approved flortaucipir F-18, the first diagnostic tau radiotracer for use with PET to estimate the density and distribution of aggregated tau neurofibrillary tangles in adults with cognitive impairment undergoing evaluation for AD. Regulatory approval of flortaucipir F-18 was based on findings from two clinical trials of terminally ill patients who were followed to autopsy. The studies included patients with a spectrum of clinically diagnosed dementias and those with normal cognition. The primary outcome of the studies was accurate visual interpretation of the images in detecting advanced AD tau neurofibrillary tangle pathology (Braak stage V or VI tau pathology). Sensitivity of five trained readers ranged from 68% to 86%, and specificity ranged from 63% to 100%; interrater agreement was 0.87. Tau PET is not yet reimbursed and is therefore not yet readily available in the clinical setting. Moreover, appropriate-use criteria have not yet been published.
 

 

 

Molecular fluid biomarkers

Cerebrospinal fluid (CSF) analysis is currently the most readily available and reimbursed test to aid in diagnosing AD, with appropriate-use criteria for patients with suspected AD. CSF biomarkers for AD are useful in cognitively impaired patients when the etiologic diagnosis is equivocal, there is only an intermediate level of diagnostic confidence, or there is very high confidence in the etiologic diagnosis. Testing for CSF biomarkers is also recommended for patients at very early clinical stages (for example, early MCI) or with atypical clinical presentations.

A decreased concentration of amyloid-beta 42 in CSF is a marker of amyloid neuritic plaques in the brain. An increased concentration of total tau in CSF reflects injury to neurons, and an increased concentration of specific isoforms of hyperphosphorylated tau reflects neurofibrillary tangles. Presently, the ratios of t-tau to amyloid-beta 42, amyloid-beta 42 to amyloid-beta 40, and phosphorylated-tau 181 to amyloid-beta 42 are the best-performing markers of AD neuropathologic changes and are more accurate than assessing individual biomarkers. These CSF biomarkers of AD have been validated against autopsy, and ratio values of CSF amyloid-beta 42 have been further validated against amyloid PET, with overall sensitivity and specificity of approximately 90% and 84%, respectively.

Some of the most exciting recent advances in AD center around the measurement of these proteins and others in plasma. Appropriate-use criteria for plasma biomarkers in the evaluation of patients with cognitive impairment were published in 2022. In addition to their use in clinical trials, these criteria cautiously recommend using these biomarkers in specialized memory clinics in the diagnostic workup of patients with cognitive symptoms, along with confirmatory CSF markers or PET. Additional data are needed before plasma biomarkers of AD are used as standalone diagnostic markers or considered in the primary care setting.

We have made remarkable progress toward more precise molecular diagnosis of brain diseases underlying cognitive impairment and dementia. Ongoing efforts to evaluate the utility of these measures in clinical practice include the need to increase diversity of patients and providers. Ultimately, the tremendous progress in molecular biomarkers for the diseases causing dementia will help the field work toward our common goal of early and accurate diagnosis, better management, and hope for people living with these diseases.

Bradford C. Dickerson, MD, MMSc, is a professor, department of neurology, Harvard Medical School, and director, Frontotemporal Disorders Unit, department of neurology, at Massachusetts General Hospital, both in Boston.

A version of this article first appeared on Medscape.com.

Treatment-resistant depression? Don’t forget about MAOIs

University of California, San Diego, psychiatrist Stephen M. Stahl, MD, PhD, has heard the scary stories about monoamine oxidase inhibitors (MAOIs): Patients supposedly need to be on restrictive diets free of culinary joys like cheese, beer, and wine; they can’t take cold medicines; and they can just forget about anesthesia for dental work or surgery.

Wrong, wrong, and wrong, Dr. Stahl told an audience at the annual meeting of the American Psychiatric Association. While the venerable antidepressants can transform the lives of patients with treatment-resistant depression, he said, MAOIs are plagued by myths that exaggerate their risks.

“These are good options,” he said. “Everybody who prescribes these today, without exception, has seen patients respond after nothing else has – including ECT (electroconvulsive therapy).”

Still, MAOIs, which were first developed in the 1950s, remain little used in the United States. While selective serotonin reuptake inhibitors (SSRIs) are prescribed at an average rate of six per second in the United States, Dr. Stahl said, “there are only a few hundred MAOI prescribers for a few thousand patients.”

The main barrier to the use of the drugs is unfamiliarity, he said. Despite their low profile, they’re appropriate to use after failures of monotherapy with SSRIs/serotonin and norepinephrine reuptake inhibitors (SNRIs) and augmentation with atypical antipsychotics. And they can be used in conjunction with ketamine/esketamine and ECT, which are other options for treatment-resistant depression, he said.

As for the myths about MAOIs, Dr. Stahl said the drugs can indeed interact with tyramine, which is found in foods like cheese, beer, and wine. The interaction can lead to potentially fatal hypertensive crises, Dr. Stahl said, noting that patients should avoid aged cheeses, tap and unpasteurized beer, soy products, and certain other foods. (Patients taking 6 mg transdermal or low-dose oral selegiline can ignore these restrictions.)

But canned beer, certain wines, yogurt, fresh American cheese, mozzarella/pizza chain cheese, cream cheese, and fresh or processed meat/poultry/fish are fine, he said. “Selectively, you can have a pretty high tyramine diet,” he added, although it’s a good idea for patients to have a blood pressure monitor at home.

As for cold medicines, sympathomimetic decongestants and stimulants should be used cautiously with blood pressure monitoring or not at all, he said, but those with codeine or expectorants are OK. Dextromethorphan, a weak serotonin reuptake inhibitor in some cough medicine, should be avoided. However, antihistamines other than chlorpheniramine/brompheniramine are OK to use, he added, and they may be the ideal choice for cold relief.

As for anesthesia, he cautioned that local anesthetics with epinephrine and general anesthesia can disrupt blood pressure. Choose a local anesthetic that does not contain vasoconstrictors, he said, and if surgery with general anesthesia is needed, “you can wash [the MAOI] out if you want” ahead of time.

Benzodiazepines, mivacurium, rapacuronium, morphine, or codeine can be used cautiously, he said, in urgent or elective surgery in a patient on an MAOI.

As for other myths, he said tricyclic antidepressants and related drugs aren’t as troublesome as psychiatrists may assume. Clomipramine and imipramine should be avoided. But other tricyclic antidepressants can be used with caution.

As for painkillers, he said it’s not true that they must be avoided, although MAOIs shouldn’t be taken with meperidine, fentanyl, methadone, tramadol, or tapentadol. Other painkillers, including over-the-counter products like aspirin, NSAIDs, and acetaminophen, should be used with caution, he said. And expert guidance is advised for use of hydromorphone, morphine, oxycodone, or oxymorphone.

In the big picture, he noted, myths are so prevalent “that you have more calls from patients – and other doctors, dentists, and anesthesiologists – about MAO inhibitors than you will ever have about any other drug there.”

Columbia University, New York, psychiatrist Jonathan W. Stewart, MD, also spoke at the presentation on MAOIs at the APA conference. He recommended that colleagues consider the drugs if two or more antidepressants that work in different ways fail to provide relief after 4 weeks at a sufficient dose. Start low with one pill a day, he recommended, and seek full remission – no depressed mood – instead of simply “better.”

Ultimately, he said, “we do patients a disservice” if MAOIs aren’t considered in the appropriate patients.

Dr. Stahl discloses grant/research support (Acadia, Allergan/AbbVie, Avanir, Boehringer Ingelheim, Braeburn, Daiichi Sankyo-Brazil, Eisai, Eli Lilly, Harmony, Indivior, Intra-Cellular Therapies, Ironshore, Neurocrine, Otsuka, Pear Therapeutics, Sage, Shire, Sunovion, Supernus, and Torrent), consultant/advisor support (Acadia, Alkermes, Allergan, AbbVie, Axsome, Clearview, Done, Eisai Pharmaceuticals, Gedeon Richter, Intra-Cellular Therapies, Karuna, Levo, Lundbeck, Neurocrine, Neurawell, Otsuka, Relmada, Sage, Sunovion, Supernus, Taliaz, Teva, Tris Pharma, and VistaGen), speakers bureau payments (Acadia, Lundbeck, Neurocrine, Otsuka, Servier, Sunovion, and Teva), and options in Genomind, Lipidio, Neurawell, and Delix. Dr. Stewart discloses unspecified relationships with Eli Lilly, Pfizer, Merck, Boehringer Ingelheim, Bristol-Myers, Sanofi-Aventis, Amylin, Novartis, Organon, GlaxoSmithKline, Shire, and Somerset.


PMBCL: Postremission, patients may safely skip radiation

Article Type
Changed
Mon, 06/12/2023 - 12:14

For patients with primary mediastinal B-cell lymphoma (PMBCL) who achieved a complete metabolic response after immunochemotherapy, radiation therapy may be safely omitted without heightening their risks of relapse or disease progression – thereby sparing them the toxicity and costs of this additional treatment.

“This study is the largest prospective study of PMBCL ever conducted,” said first author Emanuele Zucca, MD, consultant and head of the lymphoma unit at the Oncology Institute of Southern Switzerland in Bellinzona. Dr. Zucca presented the findings at the annual meeting of the American Society of Clinical Oncology (ASCO).

The results of the research underscore that “mediastinal radiation therapy in patients with complete remission after frontline immunochemotherapy can be safely omitted,” he said.

While PMBCL has a relatively low incidence, representing fewer than 5% of cases of non-Hodgkin lymphoma, the cancer is over-represented in young White women between approximately 30 and 40 years of age, and is a notably aggressive form of diffuse large B-cell lymphoma.

However, in patients who rapidly achieve remission with dose-intensive immunochemotherapy, the prognosis is good.

In such cases, the use of mediastinal radiation therapy has been seen as a measure to further consolidate the immunochemotherapy response, but the additional treatment comes at the cost of an increased risk of second malignancies, as well as coronary or valvular heart disease.

Meanwhile, in recent decades, promising data have shown that aggressive chemoimmunotherapy regimens alone, such as DA-EPOCH-R (dose-adjusted etoposide, prednisone, vincristine, cyclophosphamide, doxorubicin, and rituximab), can be enough for patients who achieve a complete remission, while novel approaches such as checkpoint inhibitors and CAR T-cell therapy further show benefits in patients whose lymphoma relapses after treatment.

With ongoing controversy over whether to include the added radiation therapy among patients with a complete metabolic response, Dr. Zucca and his colleagues conducted the IELSG37 international study, enrolling 545 patients from 74 centers in 13 countries, including 336 women, with newly diagnosed PMBCL.

The patients received induction chemoimmunotherapy with rituximab and an anthracycline-based regimen chosen according to local practice; response assessment in 530 of the 545 patients showed that 268 (50.6%) achieved a complete metabolic response.

Those patients were then randomized to either observation (n = 132) or consolidation radiation therapy (30 Gy; n = 136). The characteristics of the two groups were similar, with a mean age of 35.5 years; about 65% of patients were female.

With a median follow-up of 63 months (range, 48-60 months), the primary endpoint of progression-free survival at 30 months was not significantly different between the observation arm (96.2%) and the radiation therapy arm (98.5%; P = .278).

After adjustment for factors including sex, chemotherapy, country, and positron emission tomography (PET) response score, the estimated relative effect of radiotherapy versus observation was a hazard ratio of 0.68, and the absolute risk reduction associated with radiotherapy at 30 months was 1.2% after adjustment.

The number needed to treat is high, at 126.3 after stratification, and the 5-year overall survival was excellent in both arms, at 99%.

“What this tells us is that treatment with radiation therapy in well over 100 patients is needed just to avoid a single recurrence,” Dr. Zucca explained.
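
(For readers who want the arithmetic behind that statement: in general, the number needed to treat is the reciprocal of the absolute risk reduction, NNT = 1/ARR; the figure of 126.3 cited here is derived from the trial’s stratified analysis rather than from a crude subtraction of the two arms’ progression-free survival rates.)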

Overall survival after 3 years was excellent and identical in both arms, at about 99%.

To date, three severe cardiac events and three second cancers have been recorded in the study, all occurring among patients randomized to receive radiation therapy.

Dr. Zucca noted that longer follow-up is needed to better examine late toxicities.

“The long-term toxicities of mediastinal radiotherapy are well documented, particularly second breast, thyroid, and lung cancers and increased risk of coronary or valvular heart disease, in a patient group dominated by young adults,” Dr. Zucca said in a press statement.

“This study shows chemoimmunotherapy alone is an effective treatment for primary mediastinal B-cell lymphoma and strongly supports omitting radiotherapy without impacting chances of cure.”

Commenting on the study, Corey W. Speers, MD, PhD, assistant professor, radiation oncology, department of surgery, University of Michigan Hospital, Ann Arbor, said the findings have important clinical implications.

“We all should be encouraged by the low rates in this trial, which are lower than expected,” Dr. Speers said in a press briefing.

In further comments, he added that “these results will inform and likely change clinical practice.”

Dr. Speers said the study is notable for being the first of its kind.

“This clinical question has not previously been directly addressed, and this is the first study to do so,” he said.

“With more effective systemic therapies, many patients have their lymphoma disappear with early aggressive treatment, and although radiation is very effective at treating lymphoma, it has not been clear if it is needed in these patients that have an early rapid response to systemic therapy before starting radiation,” Dr. Speers explained.

“We have struggled as oncologists to know whether omitting this effective radiotherapy would compromise outcomes, and thus many were inclined to continue offering it to patients, even with the great early response. This study helps answer this critical question,” he said.

The results add reassuring evidence, buttressing efforts to avoid unnecessary interventions that may provide little or no benefit, Dr. Speers added.

“We are now in an era of ‘less being more’ as we seek ways to provide optimal quality and quantity of life to patients with cancer and their families, and this is just another example of the tremendous progress being made.”

Further commenting on the study at the press briefing, Julie R. Gralow, MD, ASCO chief medical officer and executive vice president, said the research supports ASCO’s ongoing efforts to reduce the toxicities of cancer treatment.

“Our ASCO vision is a world where cancer is either prevented or cured, and every patient is cured – and every survivor is healthy, and that part about every survivor being healthy is what we’re working on here [in this study],” Dr. Gralow said.

The study was funded by the Swiss Cancer League and Cancer Research UK, with partial support from the Swiss National Science Foundation. Dr. Zucca reported relationships with AstraZeneca, BeiGene, Celgene, Incyte, Janssen, Merck, Roche, Celltrion Healthcare, Kite, and AbbVie. Dr. Speers disclosed his coinvention of technology that assesses radiosensitivity and predicts benefits from adjuvant radiotherapy.


Unlocking the riddle of REM sleep

Article Type
Changed
Wed, 06/07/2023 - 09:08

Eugene Aserinsky, PhD, never wanted to study sleep. He tried being a social worker, a dental student, and even did a stint in the army as an explosives handler. He enrolled at the University of Chicago to pursue organ physiology, but all potential supervisors were too busy to take him on. His only choice was Nathaniel Kleitman, PhD, a middle-aged professor whom Dr. Aserinsky described as “always serious.” Dr. Kleitman was doing research on sleep and so, grudgingly, Dr. Aserinsky had followed suit.

Two years later, in 1953, the duo published a paper that shattered the prevailing view of sleep. They described a weird phenomenon Dr. Aserinsky later called REM sleep: periods of rapid eye movements paired with wakefulness-like activity in the brain. While 7 decades have passed since their discovery, the real essence of REM sleep and its function continue to elude us. “We are still at the very beginning of understanding this phenomenon,” Mark Blumberg, PhD, professor of psychological and brain sciences at the University of Iowa, Iowa City, said in an interview.

Before Dr. Aserinsky had walked into Dr. Kleitman’s lab, the widespread belief held that sleep was “the antithesis of wakefulness,” as Dr. Kleitman wrote in his seminal 1939 book, “Sleep and Wakefulness.” Others saw it as a kind of a coma, a passive state. Another theory, developed in the early 20th century by French psychologist Henri Piéron, PhD, held that sleepiness is caused by an accumulation of ‘hypnotoxins’ in the brain.

In his 1913 study that would likely fail a contemporary ethics review, Dr. Piéron drew fluid from the brains of sleep-deprived dogs and injected it into other dogs to induce sleep. As he explained in an interview with The Washington Times in 1933, he said he believed that fatigue toxins accumulate in the brain throughout the wakeful hours, then slowly seep into the spinal column, promoting drowsiness. Once we fall asleep, Dr. Piéron claimed, the hypnotoxins burn away.
 

From blinking to rapid eye movement

In 1925 when Dr. Kleitman established the world’s first sleep laboratory at the University of Chicago, sleep was a fringe science that most researchers avoided with a wide berth. Yet Dr. Kleitman was obsessed. The Moldova-born scientist famously worked 24/7 – literally. He not only stayed long hours in his lab, but also slept attached to a plethora of instruments to measure his brain waves, breathing, and heartbeat. At one point, Dr. Kleitman stayed awake for 180 hours (more than a week), to check how forced sleeplessness would affect his body (he later compared it to torture). He also lived 2 weeks aboard a submarine, moved his family north of the Arctic Circle, and spent over a month 119 feet below the surface in a cave in Kentucky, fighting rats, cold, and humidity to study circadian rhythms.

Dr. Kleitman was intrigued by an article in Nature in which the author asserted that he could detect the approach of slumber in train passengers by observing their blink frequencies. He instructed Dr. Aserinsky to observe sleeping infants (being monitored for a different study), to see how their blinking related to sleep. Yet Dr. Aserinsky was not amused. The project, he later wrote, “seemed about as exciting as warm milk.”

Dr. Aserinsky was uncertain whether eyelid movement with the eyes closed constituted a blink; he then noticed a 20-minute span in each hour when the infants’ eye movements ceased entirely. Still short of getting his degree, Dr. Aserinsky decided to observe sleeping adults. He hauled a dusty clanker of a brain-wave machine out of the university’s basement and started registering the electrical activity of his dozing subjects’ brains. Soon, he noticed something weird.

As he kept staring at the sleeping adults, he noticed that at times they’d have saccadic-like eye movements, just as the EEG machine would register a wake-like state of the brain. At first, he thought the machine was broken (it was ancient, after all). Then, that the subjects were awake and just keeping their eyes shut. Yet after conducting several sessions and tinkering with the EEG machine, Dr. Aserinsky finally concluded that the recordings and observations were correct: Something was indeed happening during sleep that kept the cortex activated and made the subjects’ eyes move in a jerky manner.

Dreams, memory, and thermoregulation

After studying dozens of subjects, including his son and Dr. Kleitman’s daughter, and using miles of polygraph paper, the two scientists published their findings in September 1953 in the journal Science. Dr. Kleitman didn’t expect the discovery to be particularly earth-shattering. When asked in a later interview how much research and excitement he thought the paper would generate, he replied: “none whatsoever.” That’s not how things went, though. “They completely changed the way people think,” Dr. Blumberg said. Once and for all, the REM discovery put to rest the idea that sleep was a passive state where nothing interesting happens.

Dr. Aserinsky soon left the University of Chicago, while Dr. Kleitman continued research on rapid eye movements in sleep with his new student, William Dement, MD. Together, they published studies suggesting that REM periods were when dreaming occurred – they reported that people who were awakened during REM sleep were far more likely to recall dreams than were those awakened outside of that period. “REM sleep = dreams” became established dogma for decades, even though first reports of dreams during non-REM sleep came as early as Dr. Kleitman’s and Dr. Dement’s original research (they assumed these were recollections from the preceding REM episodes).

“It turns out that you can have a perfectly good dream when you haven’t had a previous REM sleep period,” said Jerome Siegel, PhD, professor of psychiatry and biobehavioral sciences at UCLA’s Center for Sleep Research, pointing out that equating REM sleep with dreams is still “a common misconception.”

By the 1960s, REM sleep seemed to be well defined as the combination of rapid eye movement with EEG showing brain activation, first noted by Dr. Aserinsky, as well as muscle atonia – a state of near-total muscle relaxation or paralysis. Today, however, Dr. Blumberg said, things are considerably less clear cut. In one recent paper, Dr. Blumberg and his colleagues went as far as to question whether REM sleep is even “a thing.” REM sleep is prevalent across terrestrial vertebrates, but they found that it is also highly nuanced, messing up old definitions.



Take the platypus, for example, the animal with the most REM sleep (as far as we know): They have rapid eye movements and their bills twitch during REM (stillness punctuated by sudden twitches is typical of that period of sleep), but they don’t have the classic brain activation on EEG. Owls have EEG activation and twitching, but no rapid eye movements, since their eyes are largely immobile. Geese, meanwhile, are missing muscle atonia – that’s why they can sleep standing. And new studies are still coming in, showing, for instance, that even jumping spiders may have REM sleep, complete with jerky eye movements and limb twitching.

For Dr. Siegel, the findings on REM sleep in animals point to the potential explanation of what that bizarre stage of sleep may be all about: thermoregulation. “When you look at differences in sleep among the groups of warm-blooded animals, the correlation is almost perfect, and inverse. The colder they are, the more REM sleep they get,” Dr. Siegel said. During REM sleep, body thermoregulation is basically suspended, and so, as Dr. Siegel argued in The Lancet Neurology last fall, REM sleep could be a vital player in managing our brain’s temperature and metabolic activity during sleep.

Wallace B. Mendelson, MD, professor emeritus of psychiatry at the University of Chicago, said it’s likely, however, that REM sleep has more than one function. “There is no reason why one single theory has to be an answer. Most important physiological functions have multiple functions,” he said. The ideas are many, including that REM sleep helps consolidate our memories and plays an important role in emotion regulation. But it’s not that simple. A Swiss study of nearly 1,000 healthy participants did not show any correlation between sleep stage and memory consolidation. Sleep disruption at any stage can prevent memory consolidation, and quiet wakefulness with closed eyes can be as effective as sleep for memory recall.

In 1971, researchers from the National Institute of Mental Health published results of their study on total suppression of REM sleep. For as long as 40 days, they administered the monoamine oxidase inhibitor (MAOI) phenelzine, a type of drug that can completely eliminate REM sleep, to six patients with anxiety and depression. They showed that suppression of REM sleep could improve symptoms of depression, seemingly without impairing the patients’ cognitive function. Modern antidepressants, too, can greatly diminish REM sleep, Dr. Siegel said. “I’m not aware that there is any dramatic downside in having REM sleep reduced,” he said.

So do we even need REM sleep for optimal performance? Dr. Siegel said that there is a lot of exaggeration about how great REM sleep is for our health. “People just indulge their imaginations,” he said.

Dr. Blumberg pointed out that, in general, as long as you get enough sleep in the first place, you will get enough REM. “You can’t control the amount of REM sleep you have,” he explained.

REM sleep behavior disorder

Even though we may not need REM sleep to function well, REM sleep behavior disorder (RBD) is a sign that our health may be in trouble. In 1986, scientists from the University of Minnesota reported a bizarre REM sleep pathology in four men and one woman who would act out their dreams. One 67-year-old man, for example, reportedly punched and kicked his wife at night for years. One time he found himself kneeling alongside the bed with his arms extended as if he were holding a rifle (he dreamt he was in a shootout). His overall health, however, seemed unaffected apart from self-injury during some episodes.

However, in 1996 the same group of researchers reported that 11 of 29 men originally diagnosed with RBD went on to develop a parkinsonian disorder. Combined data from 24 centers of the International RBD Study Group puts that number as high as 74% at 12-year follow-up. These patients get diagnosed with Parkinson’s disease, dementia with Lewy bodies, or multiple system atrophy. Scientists believe that the protein alpha-synuclein forms toxic clumps in the brain, which are responsible both for malfunctioning of muscle atonia during REM sleep and subsequent neurodegenerative disorders.

While some researchers say that RBD may offer a unique window into better understanding REM sleep, we’re still a long way from fully figuring out this biological phenomenon. According to Dr. Blumberg, the story of REM sleep has arguably become more muddled in the 7 decades since Dr. Aserinsky and Dr. Kleitman published their original findings, which dispelled myths about ‘fatigue toxins’ and sleep as a passive, coma-like state. Dr. Mendelson concurred: “It truly remains a mystery.”

Dr. Blumberg, Dr. Mendelson, and Dr. Siegel reported no relevant disclosures.
 

A version of this article originally appeared on Medscape.com.

Publications
Topics
Sections

Eugene Aserinsky, PhD, never wanted to study sleep. He tried being a social worker, a dental student, and even did a stint in the army as an explosives handler. He enrolled at the University of Chicago to pursue organ physiology, but all potential supervisors were too busy to take him on. His only choice was Nathaniel Kleitman, PhD, a middle-aged professor whom Dr. Aserinsky described as “always serious.” Dr. Kleitman was doing research on sleep and so, grudgingly, Dr. Aserinsky had followed suit.

Two years later, in 1953, the duo published a paper that shattered the way we saw sleep. They described a weird phenomenon Dr. Aserinsky later called REM sleep: periods of rapid eye movements paired with wakefulness-like activity in the brain. While 7 decades have passed since their discovery, the real essence of REM sleep and its function continue to elude us. “We are still at the very beginning of understanding this phenomenon,” Mark Blumberg, PhD, professor of psychological and brain sciences at University of Iowa, Iowa City, said in an interview.

Before Dr. Aserinsky had walked into Dr. Kleitman’s lab, the widespread belief held that sleep was “the antithesis of wakefulness,” as Dr. Kleitman wrote in his seminal 1939 book, “Sleep and Wakefulness.” Others saw it as a kind of a coma, a passive state. Another theory, developed in the early 20th century by French psychologist Henri Piéron, PhD, held that sleepiness is caused by an accumulation of ‘hypnotoxins’ in the brain.

In his 1913 study that would likely fail a contemporary ethics review, Dr. Piéron drew fluid from the brains of sleep-deprived dogs and injected it into other dogs to induce sleep. As he explained in an interview with The Washington Times in 1933, he said he believed that fatigue toxins accumulate in the brain throughout the wakeful hours, then slowly seep into the spinal column, promoting drowsiness. Once we fall asleep, Dr. Piéron claimed, the hypnotoxins burn away.
 

From blinking to rapid eye movement

In 1925 when Dr. Kleitman established the world’s first sleep laboratory at the University of Chicago, sleep was a fringe science that most researchers avoided with a wide berth. Yet Dr. Kleitman was obsessed. The Moldova-born scientist famously worked 24/7 – literally. He not only stayed long hours in his lab, but also slept attached to a plethora of instruments to measure his brain waves, breathing, and heartbeat. At one point, Dr. Kleitman stayed awake for 180 hours (more than a week), to check how forced sleeplessness would affect his body (he later compared it to torture). He also lived 2 weeks aboard a submarine, moved his family north of the Arctic Circle, and spent over a month 119 feet below the surface in a cave in Kentucky, fighting rats, cold, and humidity to study circadian rhythms.

Dr. Kleitman was intrigued by an article in Nature in which the author asserted that he could detect the approach of slumber in train passengers by observing their blink frequencies. He instructed Dr. Aserinsky to observe sleeping infants (being monitored for a different study), to see how their blinking related to sleep. Yet Dr. Aserinsky was not amused. The project, he later wrote, “seemed about as exciting as warm milk.”

Dr. Aserinsky was uncertain whether eyelid movement with the eyes closed constituted a blink, then he noticed a 20-minute span in each hour when eye movement ceased entirely. Still short of getting his degree, Dr. Aserinsky decided to observe sleeping adults. He hauled a dusty clanker of a brain-wave machine out of the university’s basement, and started registering the electrical activity of the brain of his dozing subjects. Soon, he noticed something weird.

As he kept staring at the sleeping adults, he noticed that at times they’d have saccadic-like eye movements, just as the EEG machine would register a wake-like state of the brain. At first, he thought the machine was broken (it was ancient, after all). Then, that the subjects were awake and just keeping their eyes shut. Yet after conducting several sessions and tinkering with the EEG machine, Dr. Aserinsky finally concluded that the recordings and observations were correct: Something was indeed happening during sleep that kept the cortex activated and made the subjects’ eyes move in a jerky manner.
 

 

 

Dreams, memory, and thermoregulation

After studying dozens of subjects, including his son and Dr. Kleitman’s daughter, and using miles of polygraph paper, the two scientists published their findings in September 1953 in the journal Science. Dr. Kleitman didn’t expect the discovery to be particularly earth-shattering. When asked in a later interview how much research and excitement he thought the paper would generate, he replied: “none whatsoever.” That’s not how things went, though. “They completely changed the way people think,” Dr. Blumberg said. Once and for all, the REM discovery put to rest the idea that sleep was a passive state where nothing interesting happens.

Dr. Aserinsky soon left the University of Chicago, while Dr. Kleitman continued research on rapid eye movements in sleep with his new student, William Dement, MD. Together, they published studies suggesting that REM periods were when dreaming occurred – they reported that people who were awakened during REM sleep were far more likely to recall dreams than were those awakened outside of that period. “REM sleep = dreams” became established dogma for decades, even though first reports of dreams during non-REM sleep came as early as Dr. Kleitman’s and Dr. Dement’s original research (they assumed these were recollections from the preceding REM episodes).

“It turns out that you can have a perfectly good dream when you haven’t had a previous REM sleep period,” said Jerome Siegel, PhD, professor of psychiatry and biobehavioral sciences at UCLA’s Center for Sleep Research, pointing out that equating REM sleep with dreams is still “a common misconception.”

By the 1960s, REM sleep seemed to be well defined as the combination of rapid eye movement with EEG showing brain activation, first noted by Dr. Aserinsky, as well as muscle atonia – a state of near-total muscle relaxation or paralysis. Today, however, Dr. Blumberg said, things are considerably less clear cut. In one recent paper, Dr. Blumberg and his colleagues went as far as to question whether REM sleep is even “a thing.” REM sleep is prevalent across terrestrial vertebrates, but they found that it is also highly nuanced, messing up old definitions.



Take the platypus, for example, the animal with the most REM sleep (as far as we know): They have rapid eye movements and their bills twitch during REM (stillness punctuated by sudden twitches is typical of that period of sleep), but they don’t have the classic brain activation on EEG. Owls have EEG activation and twitching, but no rapid eye movements, since their eyes are largely immobile. Geese, meanwhile, are missing muscle atonia – that’s why they can sleep standing. And new studies are still coming in, showing, for instance, that even jumping spiders may have REM sleep, complete with jerky eye movements and limb twitching.

For Dr. Siegel, the findings on REM sleep in animals point to the potential explanation of what that bizarre stage of sleep may be all about: thermoregulation. “When you look at differences in sleep among the groups of warm-blooded animals, the correlation is almost perfect, and inverse. The colder they are, the more REM sleep they get,” Dr. Siegel said. During REM sleep, body thermoregulation is basically suspended, and so, as Dr. Siegel argued in The Lancet Neurology last fall, REM sleep could be a vital player in managing our brain’s temperature and metabolic activity during sleep.

Wallace B. Mendelson, MD, professor emeritus of psychiatry at the University of Chicago, said it’s likely, however, that REM sleep has more than one function. “There is no reason why one single theory has to be an answer. Most important physiological functions have multiple functions,” he said. The ideas are many, including that REM sleep helps consolidate our memories and plays an important role in emotion regulation  But it’s not that simple. A Swiss study of nearly 1,000 healthy participants did not show any correlation between sleep stage and memory consolidation. Sleep disruption of any stage can prevent memory consolidation and quiet wakefulness with closed eyes can be as effective as sleep for memory recall.

In 1971, researchers from the National Institute of Mental Health published results of their study on total suppression of REM sleep. For as long as 40 days, they administered the monoamine oxidase inhibitor (MAOI) phenelzine, a type of drug that can completely eliminate REM sleep, to six patients with anxiety and depression. They showed that suppression of REM sleep could improve symptoms of depression, seemingly without impairing the patients’ cognitive function. Modern antidepressants, too, can greatly diminish REM sleep, Dr. Siegel said. “I’m not aware that there is any dramatic downside in having REM sleep reduced,” he said.

So do we even need REM sleep for optimal performance? Dr. Siegel said that there is a lot of exaggeration about how great REM sleep is for our health. “People just indulge their imaginations,” he said.

Dr. Blumberg pointed out that, in general, as long as you get enough sleep in the first place, you will get enough REM. “You can’t control the amount of REM sleep you have,” he explained.
 

 

 

REM sleep behavior disorder

Even though we may not need REM sleep to function well, REM sleep behavior disorder (RBD) is a sign that our health may be in trouble. In 1986, scientists from the University of Minnesota reported a bizarre REM sleep pathology in four men and one woman who would act out their dreams. One 67-year-old man, for example, reportedly punched and kicked his wife at night for years. One time he found himself kneeling alongside the bed with his arms extended as if he were holding a rifle (he dreamt he was in a shootout). His overall health, however, seemed unaffected apart from self-injury during some episodes.

However, in 1996 the same group of researchers reported that 11 of 29 men originally diagnosed with RBD went on to develop a parkinsonian disorder. Combined data from 24 centers of the International RBD Study Group puts that number as high as 74% at 12-year follow-up. These patients get diagnosed with Parkinson’s disease, dementia with Lewy bodies, or multiple system atrophy. Scientists believe that the protein alpha-synuclein forms toxic clumps in the brain, which are responsible both for malfunctioning of muscle atonia during REM sleep and subsequent neurodegenerative disorders.

While some researchers say that RBD may offer a unique window into better understanding REM sleep, we’re still a long way off from fully figuring out this biological phenomenon. According to Dr. Blumberg, the story of REM sleep has arguably become more muddled in the 7 decades since Dr. Aserinsky and Dr. Kleitman published their original findings, dispelling myths about ‘fatigue toxins’ and sleep as a passive, coma-like state.  Dr. Mendelson concurred: “It truly remains a mystery.”

Dr. Blumberg, Dr. Mendelson, and Dr. Siegel reported no relevant disclosures.
 

A version of this article originally appeared on Medscape.com.


Ovarian cancer: Sequencing strategy identifies biomarker that could guide treatment

Article Type
Changed
Wed, 06/07/2023 - 09:08

A targeted genetic sequencing strategy effectively identified homologous recombination DNA repair deficiency in ovarian cancer patients, and may eventually help predict treatment response, a study suggests.

Previous research has identified homologous recombination DNA repair deficiency (HRD) as a biomarker for sensitivity to poly(ADP-ribose) polymerase inhibitors (PARPi) and platinum-based therapies in patients with ovarian and breast cancers, wrote Niklas Krumm, MD, of the University of Washington, Seattle, and colleagues.

Currently, direct genetic testing is the most widely used method to identify mutations in HRD-associated genes, but not all genes underlying HRD have been identified; therefore, HRD assays that don’t rely on gene-specific information have been considered more diagnostically useful, the researchers noted. Two such genetic tests are approved by the Food and Drug Administration: the FoundationFocus CDX BRCA, approved in 2016, and myChoice CDx, approved in 2019, the researchers wrote.

“However, transparent, well-defined methods and criteria for diagnosing HRD by genomic scarring that are practical for smaller, academic, or private laboratories have not yet been established or widely implemented,” they said.

In the paper published in JCO Precision Oncology, the researchers said they developed a molecular testing strategy based on common single-nucleotide polymorphisms (SNPs).

They used a panel of approximately 3,000 SNPs distributed across the genome to create a loss of heterozygosity (LOH) score that could identify HRD.

To determine the ability of LOH to diagnose HRD in ovarian cancers, the researchers examined 99 ovarian neoplasm–normal pairs using the LOH method, and compared results with patient mutational genotypes and HRD predictors. LOH scores of 11% or higher showed greater than 86% sensitivity for identifying tumors with HRD-causing mutations in an independent validation set, and a sensitivity of 90.9% across training and validation sets.
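To make the scoring concrete, the sketch below shows one simplified way a genome-wide LOH fraction could be computed from paired tumor–normal genotypes at panel SNPs and compared against the 11% decision threshold described above. It is a hypothetical illustration only: the column names, the per-SNP B-allele–fraction heuristics, and the internal cutoffs are assumptions for readability, not the authors’ published pipeline.

```python
# Hypothetical sketch of an LOH score from paired tumor-normal SNP data.
# Column names and per-SNP thresholds are illustrative assumptions; only the
# >= 11% decision threshold comes from the study described above.
import pandas as pd

def loh_score(snps: pd.DataFrame,
              het_band=(0.4, 0.6),
              tumor_loh_cutoff=0.3) -> float:
    """Fraction of informative (heterozygous-in-normal) SNPs showing LOH."""
    # Informative sites: heterozygous in the matched normal (BAF near 0.5).
    het = snps[snps["normal_baf"].between(*het_band)]
    if het.empty:
        return float("nan")
    # LOH call: tumor B-allele fraction pushed toward 0 or 1 (one allele lost).
    deviation = (het["tumor_baf"] - 0.5).abs()
    return (deviation >= tumor_loh_cutoff).mean()

# Toy panel of four SNPs; a real panel spans roughly 3,000 SNPs across the genome.
panel = pd.DataFrame({
    "normal_baf": [0.49, 0.52, 0.05, 0.47],
    "tumor_baf":  [0.10, 0.48, 0.04, 0.95],
})
score = loh_score(panel)
print(f"LOH score: {score:.0%} ->", "HRD-positive" if score >= 0.11 else "HRD-negative")
```

In practice, LOH for HRD testing is usually assessed over segmented chromosomal regions (often with minimum segment-length criteria) rather than SNP by SNP, so the published methods should be consulted for the actual implementation.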

When LOH scores were compared with a validated genome-wide mutational signature assay (HRDetect), the sensitivity and specificity of an LOH score of 11% or higher were estimated at 96.7% and 50%, respectively, for determining HRD-positive tumors.
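For readers less familiar with how such agreement statistics are computed, the short snippet below derives sensitivity and specificity from a two-by-two comparison against reference calls. The counts are invented, chosen only so the output matches the percentages quoted above; they are not the study’s actual data.

```python
# Illustrative arithmetic only: the counts below are invented so the output
# reproduces the quoted percentages; they are not the study's data.
def sensitivity_specificity(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Reference = HRDetect HRD calls; test = LOH score >= 11%.
sens, spec = sensitivity_specificity(tp=29, fn=1, tn=5, fp=5)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")  # 96.7%, 50.0%
```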

However, the researchers found poor concordance (a statistically nonsignificant correlation) when using their LOH capture design to diagnose HRD based on mutational signatures from targeted regions alone. “We conclude that mutational signatures inferred from our diagnostic tumor panel are unable to accurately ascertain HRD status, likely because the absolute number of somatic variants that it is able to identify is insufficient,” they said.

LOH scores were not significantly correlated with treatment outcomes, which suggests that the LOH score can be used to infer HRD status rather than serving as a direct predictor of patient response to primary platinum therapy, the researchers said. The average LOH score was higher in patients whose cancers responded to platinum therapy than in those with no treatment response (17% vs. 15%), but this difference was not significant.

Study limitations

The research was limited by several factors, including validation only in high-grade non–clear cell ovarian carcinomas; because LOH scores likely vary across cancer types, more studies will be needed to optimize the strategy for different cancers, the researchers noted. Other potential limitations include the high level of tumor cellularity required (30%), which will exclude some specimens, they said.

Finally, the poor predictive value of LOH itself for treatment outcomes suggests a limitation of the HRD biomarker in this respect, the researchers concluded.
 

Potential advantages of using LOH method

However, the potential advantages of the LOH method include the minimal number of sequencing reads required and the ability to integrate the LOH score into current targeted gene-capture workflows, the researchers wrote, and the LOH score appears to be a reliable predictor of HRD positivity.

“Although we have found that the regions targeted by our assay are insufficient to identify HRD-associated mutational signatures, future refinements to this approach could integrate minimal additional sequencing targets designed to robustly identify such signatures in concert with LOH events,” they concluded.
 

Study shares the details of detection methodology

“Tumors with HRD are sensitive to certain cancer chemotherapeutic agents [PARP inhibitors],” said Dr. Krumm, in an interview. “Until recently, HR-deficient tumors were primarily identified via inactivating BRCA1 or BRCA2 mutations, but now it is understood that an entire repair pathway can be affected and can result in HRD. Therefore, we sought to implement an NGS-based approach that could detect the ‘HRD phenotype’ in the DNA of tumors,” he said.

The approach developed by Dr. Krumm and colleagues and presented in the current study “is not the first in the field, as some commercial tests have similar approaches,” he said. However, the current study is important, “because it openly publishes the methodology and detailed results of our validation work in bringing HRD detection online in our clinical lab,” he said.

“One of the advantages of a genome-wide approach is that we can identify HR-deficient tumors, even when BRCA1 and BRCA2 do not have any detectable loss-of-function mutations,” said Dr. Krumm. “HRD detection is a relatively young test in the field of next-generation sequencing (NGS)–based cancer diagnostics. One of the challenges currently is the lack of large, standardized reference data sets or reference materials that can be used to compare tests and methodology in a clinical setting. We hope that by publishing our methods, more data sets can be generated and published,” he said.

Some specific challenges to using the test clinically today include the need for a paired tumor plus blood sample, and the need for a relatively high fraction of tumor content in the sample, Dr. Krumm noted.

“This test is currently being used in a clinical setting at the University of Washington, as it is a laboratory-developed test (LDT) and part of our clinically validated NGS platform,” said Dr. Krumm. “This test highlights how LDTs can advance clinical testing capabilities and improve the care of our patients and illustrates the UW Medicine position that LDTs are a necessary and important part of the clinical care. That said, we anticipate that additional validation studies, including long-term clinical effectiveness and outcome studies, will be required to bring HRD testing into a commercial platform that undergoes FDA review,” he explained.

The study was supported by the Brotman Baty Institute for Precision Medicine, the National Institutes of Health, and the Department of Defense, Ovarian Cancer Research Program Clinical Development Award. Dr. Krumm disclosed stock and ownership interests in Reference Genomics.


FDA OKs Injectafer for iron deficiency anemia in heart failure

Article Type
Changed
Thu, 06/08/2023 - 11:02

 

The Food and Drug Administration has expanded the indication for ferric carboxymaltose injection (Injectafer, Daiichi Sankyo/American Regent) to include treatment of iron deficiency in adults with New York Heart Association (NYHA) class II/III heart failure (HF).

“This new indication for Injectafer marks the first and only FDA approval of an intravenous iron replacement therapy for adult patients with heart failure,” Ravi Tayi, MD, MPH, chief medical officer at American Regent, said in a news release.

Ferric carboxymaltose injection is also indicated for the treatment of iron deficiency anemia in adults and children as young as 1 year of age who have either intolerance or an unsatisfactory response to oral iron, and in adult patients who have non–dialysis-dependent chronic kidney disease.

The new indication in HF was supported by data from the CONFIRM-HF randomized controlled trial that evaluated the efficacy and safety of ferric carboxymaltose injection in adults with chronic HF and iron deficiency.

Results showed that treatment with ferric carboxymaltose injection significantly improved exercise capacity compared with placebo in iron-deficient patients with HF.

No new safety signals emerged. The most common treatment-emergent adverse events were headache, nausea, hypertension, injection site reactions, hypophosphatemia, and dizziness.

According to the company, ferric carboxymaltose injection has been studied in more than 40 clinical trials that included over 8,800 patients worldwide and has been approved in 86 countries.

A version of this article first appeared on Medscape.com.


Multiple changes in NMOSD treatment for nonmedical reasons tied to poorer outcomes

Article Type
Changed
Wed, 06/07/2023 - 09:09

 

Multiple treatment transitions in patients with neuromyelitis optica spectrum disorder (NMOSD) for nonmedical reasons are associated with increased neurological harm, including relapse risk and disease progression, new research shows.

“For the first time, we were able to quantify clinical outcomes associated with treatment transitions in people with NMOSD. Our data highlight that aspects outside of therapeutic efficacy may be remarkably meaningful in the effective suppression of disease advancement,” said senior investigator Darin T. Okuda, MD, professor of neurology and director of the neuroinnovation program at University of Texas Southwestern Medical Center in Dallas.

The findings were presented at the annual meeting of the Consortium of Multiple Sclerosis Centers.

Treatment delayed?

NMOSD, an inflammatory syndrome of the central nervous system, can cause irreversible disability. As treatments have improved over time, transitioning from one medication to newer options has become increasingly common.

To better understand the effects of multiple treatment transitions, the researchers conducted a retrospective analysis of electronic medical records of 164 patients with aquaporin-4 IgG–positive NMOSD. Of these individuals, 89 met the study’s inclusion criteria.

Of the participants, 89% were female, and the median disease duration was 10.1 years. Forty-two patients had switched therapies at least once; 26 switched at least twice; 12 switched at least three times; six switched four times; and three switched therapies five times or more for a total of 174 treatment transitions.

Patients were stratified into two groups – those who transitioned for medical reasons (53.4%), and those who switched because of nonmedical/tolerability reasons (46.6%).

Top reasons for transitioning in the medical category included clinical relapse and/or new MRI activity (29.9%), physician-directed transition (11.5%), and increased physical or clinical disability (4.0%). Leading reasons for nonmedical transitions were side effects (16.7%), adherence/persistence (8.1%), and cost/access (5.75%).

A recurrent-event survival analysis showed that, after just one transition for nonmedical or tolerability reasons, outcomes significantly improved, with the risk of hospitalization decreasing by 40.3% (P = .005), the risk of relapse decreasing by 53.1% (P = .002), and the risk of advancement on MRI decreasing by 65.9% (P = .005).

Conversely, each additional drug discontinuation in the nonmedical group was associated with worse outcomes. These included a 25.2% increased risk of hospitalization (P = .0003), a 24.4% increase in relapse risk (P = .06), and a 41.9% increased risk of MRI advancement (P = .03).

In terms of transitions for medical reasons, there was a significantly increased risk of MRI advancement with the first switch (32.2%; P = .005). However, no significant increases in risk were associated with each additional transition (P = .33).
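The recurrent-event analysis referred to above is typically fit on data in counting-process format, in which each patient contributes one row per treatment interval and the number of prior transitions enters as a time-varying covariate; under such a multiplicative hazards model, a 25.2% increase per additional discontinuation compounds, so three extra nonmedical switches would imply roughly a 1.252^3 ≈ 2-fold hospitalization hazard. The sketch below is a minimal, hypothetical illustration of that kind of model using the open-source lifelines package on synthetic data; the column names, dataset, and effect size are assumptions for illustration, not the investigators’ actual variables or results.

```python
# Minimal, hypothetical sketch of a recurrent-event (counting-process) Cox
# model; the synthetic data and column names are assumptions for illustration.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(0)
rows = []
for pid in range(1, 61):                      # 60 synthetic patients
    n_intervals = rng.integers(1, 4)          # 1-3 treatment intervals each
    t = 0.0
    for k in range(n_intervals):
        # Toy effect: hazard of relapse/hospitalization grows ~25% per prior switch.
        hazard = 0.002 * 1.25 ** k
        gap = rng.exponential(1.0 / hazard)
        event = int(gap < 365.0)              # event within the interval, else censored
        stop = t + min(gap, 365.0)
        rows.append({"id": pid, "start": t, "stop": stop,
                     "n_prior_switches": k, "event": event})
        t = stop

intervals = pd.DataFrame(rows)                # one row per at-risk interval
ctv = CoxTimeVaryingFitter()
ctv.fit(intervals, id_col="id", event_col="event",
        start_col="start", stop_col="stop")
ctv.print_summary()                           # exp(coef) ~ hazard ratio per additional switch
```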

The median time spent on the first treatment was 306 days in the transition for medical reasons group and 378 days for the nonmedical/tolerability group.

The median time between treatments during the initial transition was just 7 days among those transitioning for medical reasons versus 91 days for nonmedical reasons. The gaps for additional transitions were also substantially longer in the nonmedical group, with medians of 22 days for medical reasons and 80 days for nonmedical reasons.

“The median time spent on a first-line therapy regardless of [whether] that first transition was due to a medical or nonmedical tolerability reason was similar; however, the duration of that initial transition was 13 times longer if the transition had to do with a nonmedical or tolerability reason,” first author Alexander D. Smith, a clinical data specialist at UT Southwestern Medical Center, told conference delegates. “Similarly, each additional transition was almost four times longer if it had to do with a nonmedical or tolerability reason,” he said.

Dr. Okuda noted the longer window between treatment transitions may be a key factor in the different outcomes between the groups. “A central theory involves the increased amount of time between treatments,” he said.

“The reasons for the delay in starting a new treatment may be related to a variety of factors, including laboratory testing required to start a new treatment, third-party administrator coverage, time for the resolution of adverse reactions, and/or personal factors from the individual undergoing treatment, etc.”

Another factor, Mr. Smith said in his talk, is that, “when people are left miserable by a prior treatment exposure, they may simply be hesitant to get on the next therapy.”

The finding that transitions for medical reasons were associated only with MRI advancement suggests that worsening disease activity is not necessarily what drives repeated transitions; nonmedical reasons are often the cause and were more strongly associated with worse outcomes.

With the time between treatments a possible culprit, Dr. Okuda said the clinical implications are that “treatment transitions, regardless of the reason, should occur as quickly as possible to reduce the risk for disease progression associated with NMOSD.”

Mr. Smith echoed the suggestion, adding that “it’s important that even if disease activity is not present, complacency should be avoided.”

“Clinicians and third-party administrators should work to ensure that people with NMOSD have accelerated switches onto their next therapy, even if that disease activity is not present. In a sense, rapid treatment transitions may have equitable benefits to the treatments themselves,” Mr. Smith added.

Important research

Commenting on the study, Shailee Shah, MD, an assistant professor in the Neuroimmunology division at Vanderbilt University Medical Center, in Nashville, Tenn., noted the findings are consistent with generally higher concerns around switching treatments for nonmedical reasons.

“In general, if a high-efficacy medication is started, it appears that patients are less likely to require a transition to a different medication. It is a little harder to predict who may have issues with tolerability or nonmedical reasons to transition medications, and many providers would likely agree that these transitions do raise some concerns about the risk of relapse or hospitalization in the interim,” she said.

Dr. Shah added that in her experience patients who require multiple transitions are either started on lower-efficacy medications at treatment initiation or have highly refractory disease.

The study’s findings underscore that “identifying additional risk factors and underlying reasons for these findings will be imperative in the future,” Dr. Shah said.

The study was supported by Revert Health, a corporation founded by Dr. Okuda. Dr. Okuda reports receiving personal compensation for consulting and advisory services from Alexion, Biogen, Celgene/Bristol Myers Squibb, EMD Serono, Genentech, Genzyme, Janssen Pharmaceuticals, Novartis, Osmotica Pharmaceuticals, RVL Pharmaceuticals, TG Therapeutics, Viela Bio, and research support from Biogen, EMD Serono/Merck, and Novartis. Dr. Shah reports that she has served on advisory boards for Horizon, Alexion, and Genentech.

A version of this article first appeared on Medscape.com.


 

Multiple treatment transitions in patients with neuromyelitis optica spectrum disorder (NMOSD) for nonmedical reasons are associated with increased neurological harm, including relapse risk and disease progression, new research shows.

“For the first time, we were able to quantify clinical outcomes associated with treatment transitions in people with NMOSD. Our data highlight that aspects outside of therapeutic efficacy may be remarkably meaningful in the effective suppression of disease advancement,” said senior investigator Darin T. Okuda, MD, professor of neurology and director of the neuroinnovation program at University of Texas Southwestern Medical Center in Dallas.

The findings were presented at the annual meeting of the Consortium of Multiple Sclerosis Centers.

Treatment delayed?

NMOSD, an inflammatory syndrome of the central nervous system, can cause irreversible disability. As treatments have improved over time, transitioning from one medication to newer options has become increasingly common.

To better understand the effects of multiple treatment transitions, the researchers conducted a retrospective analysis of electronic medical records of 164 patients with aquaporin-4 IgG–positive NMOSD. Of these individuals, 89 met the study’s inclusion criteria.

Of the participants, 89% were female, and the median disease duration was 10.1 years. Forty-two patients had switched therapies at least once; 26 switched at least twice; 12 switched at least three times; six switched four times; and three switched therapies five times or more for a total of 174 treatment transitions.

Patients were stratified into two groups – those who transitioned for medical reasons (53.4%), and those who switched because of nonmedical/tolerability reasons (46.6%).

Top reasons for transitioning in the medical category included clinical relapse and/or new MRI activity (29.9%), physician-directed transition (11.5%), and increased physical or clinical disability (4.0%). Leading reasons for nonmedical transitions were side effects (16.7%), adherence/persistence (8.1%), and cost/access (5.75%).

A recurrent event survival analysis showed that, after the first transition for nonmedical or tolerability reasons, outcomes improved significantly: the risk of hospitalization decreased by 40.3% (P = .005), the risk of relapse by 53.1% (P = .002), and the risk of advancement on MRI by 65.9% (P = .005).

Conversely, each additional drug discontinuation in the nonmedical group was associated with worse outcomes. These included a 25.2% increased risk of hospitalization (P = .0003), a 24.4% increase in relapse risk (P = .06), and a 41.9% increased risk of MRI advancement (P = .03).

Among transitions for medical reasons, the first switch was associated with a significantly increased risk of MRI advancement (32.2%; P = .005); however, each additional transition was not associated with a significant increase in risk (P = .33).

The median time spent on the first treatment was 306 days in the medical-reasons group and 378 days in the nonmedical/tolerability group.

The median time spent between treatments during the initial transition was just 7 days among those transitioning for medical reasons versus 91 days among those switching for nonmedical reasons. The gap for each additional transition was also substantially longer in the nonmedical group, at a median of 80 days versus 22 days.

“The median time spent on a first-line therapy regardless of [whether] that first transition was due to a medical or nonmedical tolerability reason was similar; however, the duration of that initial transition was 13 times longer if the transition had to do with a nonmedical or tolerability reason,” first author Alexander D. Smith, a clinical data specialist at UT Southwestern Medical Center, told conference delegates. “Similarly, each additional transition was almost four times longer if it had to do with a nonmedical or tolerability reason,” he said.

Dr. Okuda noted the longer window between treatment transitions may be a key factor in the different outcomes between the groups. “A central theory involves the increased amount of time between treatments,” he said.

“The reasons for the delay in starting a new treatment may be related to a variety of factors, including laboratory testing required to start a new treatment, third-party administrator coverage, time for the resolution of adverse reactions, and/or personal factors from the individual undergoing treatment, etc.”

Another factor, Mr. Smith said in his talk, is that, “when people are left miserable by a prior treatment exposure, they may simply be hesitant to get on the next therapy.”

The finding that transitions for medical reasons were associated only with MRI advancement suggests that worsening disease activity is not necessarily what drives repeated transitions; nonmedical reasons are often the cause and are more likely to be associated with worse outcomes.

With the time between treatments a possible culprit, Dr. Okuda said the clinical implications are that “treatment transitions, regardless of the reason, should occur as quickly as possible to reduce the risk for disease progression associated with NMOSD.”

Mr. Smith echoed the suggestion, adding that “it’s important that even if disease activity is not present, complacency should be avoided.”

“Clinicians and third-party administrators should work to ensure that people with NMOSD have accelerated switches onto their next therapy, even if that disease activity is not present. In a sense, rapid treatment transitions may have equitable benefits to the treatments themselves,” Mr. Smith added.

Important research

Commenting on the study, Shailee Shah, MD, an assistant professor in the neuroimmunology division at Vanderbilt University Medical Center, Nashville, Tenn., noted that the findings are consistent with the generally greater concern surrounding treatment switches made for nonmedical reasons.

“In general, if a high-efficacy medication is started, it appears that patients are less likely to require a transition to a different medication. It is a little harder to predict who may have issues with tolerability or nonmedical reasons to transition medications, and many providers would likely agree that these transitions do raise some concerns about the risk of relapse or hospitalization in the interim,” she said.

Dr. Shah added that, in her experience, patients who require multiple transitions either were started on lower-efficacy medications at treatment initiation or have highly refractory disease.

The study’s findings underscore that “identifying additional risk factors and underlying reasons for these findings will be imperative in the future,” Dr. Shah said.

The study was supported by Revert Health, a corporation founded by Dr. Okuda. Dr. Okuda reports receiving personal compensation for consulting and advisory services from Alexion, Biogen, Celgene/Bristol Myers Squibb, EMD Serono, Genentech, Genzyme, Janssen Pharmaceuticals, Novartis, Osmotica Pharmaceuticals, RVL Pharmaceuticals, TG Therapeutics, Viela Bio, and research support from Biogen, EMD Serono/Merck, and Novartis. Dr. Shah reports that she has served on advisory boards for Horizon, Alexion, and Genentech.

A version of this article first appeared on Medscape.com.
