COVID-19 linked to development of myasthenia gravis

Myasthenia gravis should be added to the growing list of potential neurological sequelae associated with COVID-19, new research suggests. Clinicians from Italy have described what they believe are the first three reported cases of acetylcholine receptor (AChR) antibody-positive myasthenia gravis after COVID-19 infection.

“I think it is possible that there could be many more cases,” said lead author Domenico Restivo, MD, of the Garibaldi Hospital, Catania, Italy. “In fact, myasthenia gravis could be underestimated especially in the course of COVID-19 infection in which a specific muscular weakness is frequently present. For this reason, this association is easy to miss if not top of mind,” Dr. Restivo said.

None of the three patients had previous neurologic or autoimmune disorders. In all three cases, symptoms of myasthenia gravis appeared within 5-7 days after onset of fever caused by SARS-CoV-2 infection. The time from presumed SARS-CoV-2 infection to myasthenia gravis symptoms “is consistent with the time from infection to symptoms in other neurologic disorders triggered by infections,” the investigators reported.

The findings were published online August 10 in Annals of Internal Medicine.
 

First patients

The first patient described in the report was a 64-year-old man who had a fever as high as 39° C (102.2° F) for 4 days. Five days after fever onset, he developed diplopia and muscle fatigue. The patient’s neurologic examination was “unremarkable.” Computed tomography (CT) of the thorax excluded thymoma, and findings on chest radiograph were normal. He tested positive for SARS-CoV-2 on nasopharyngeal swab and real-time reverse transcriptase polymerase chain reaction (RT-PCR).

The patient’s symptoms led the investigators to suspect myasthenia gravis. Repetitive stimulation of the patient’s facial nerve showed a 57% decrement, confirming involvement of the postsynaptic neuromuscular junction. The concentration of AChR antibodies in serum was also elevated (22.8 pmol/L; reference range, <0.4 pmol/L). The patient was treated with pyridostigmine bromide and prednisone and had a response “typical for someone with myasthenia gravis,” the researchers wrote.

The second patient was a 68-year-old man who had a fever as high as 38.8° C (101.8° F) for 7 days. On day 7, he developed muscle fatigue, diplopia, and dysphagia. Findings of a chest CT and neurologic exam were normal. Nasopharyngeal swab and RT-PCR testing for COVID-19 were positive. As with the first patient, myasthenia gravis was suspected because of the patient’s symptoms. Repetitive nerve stimulation revealed a postsynaptic deficit of neuromuscular transmission of the facial (52%) and ulnar (21%) nerves. His serum AChR antibody level was elevated (27.6 pmol/L). The patient improved after one cycle of intravenous immunoglobulin treatment.
 

Possible mechanisms

The third patient was a 71-year-old woman with cough and a fever up to 38.6° C (101.5° F) for 6 days. She initially tested negative for SARS-CoV-2 on nasopharyngeal swab and RT-PCR. Five days after her symptoms began, she developed bilateral ocular ptosis, diplopia, and hypophonia. CT of the thorax excluded thymoma but showed bilateral interstitial pneumonia. On day 6, she developed dysphagia and respiratory failure and was transferred to the intensive care unit (ICU), where she received mechanical ventilation.

Repetitive nerve stimulation revealed a postsynaptic deficit of neuromuscular transmission of the ulnar nerve (56%), and her serum AChR antibody level was elevated (35.6 pmol/L). Five days later, a second nasopharyngeal swab test for SARS-CoV-2 was positive. The patient improved following plasmapheresis treatment and was successfully extubated.
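
To put those antibody titers in context, the short sketch below (an illustrative calculation, not part of the case report) expresses each patient's AChR antibody level as a multiple of the 0.4 pmol/L upper reference limit quoted above.

```python
# Illustrative calculation only: AChR antibody levels from the three reported
# cases, expressed as multiples of the upper reference limit (0.4 pmol/L).
UPPER_REFERENCE_LIMIT = 0.4  # pmol/L, as cited in the case report

achr_levels_pmol_per_l = {"patient 1": 22.8, "patient 2": 27.6, "patient 3": 35.6}

for patient, level in achr_levels_pmol_per_l.items():
    fold_elevation = level / UPPER_REFERENCE_LIMIT
    print(f"{patient}: {level} pmol/L, roughly {fold_elevation:.0f}x the upper reference limit")
```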

The investigators noted that this patient received hydroxychloroquine the day after the onset of neurologic symptoms, but the drug was withdrawn a day later, so they do not believe that it caused the symptoms of myasthenia gravis.

The observations in these three patients are “consistent with reports of other infections that induce autoimmune disorders, as well as with the growing evidence of other neurologic disorders with presumed autoimmune mechanisms after COVID-19 onset,” the researchers wrote.

They offered several possible explanations for the link between COVID-19 and myasthenia gravis. “Antibodies that are directed against SARS-CoV-2 proteins may cross-react with AChR subunits, because the virus has epitopes that are similar to components of the neuromuscular junction; this is known to occur in other neurologic autoimmune disorders after infection. Alternatively, COVID-19 infection may break immunologic self-tolerance,” the investigators wrote.

“The main message for clinicians is that myasthenia gravis, as well as other neurological disorders associated with autoimmunity, could occur in the course of SARS-CoV-2 infection,” Dr. Restivo said. Prompt recognition of the disease “could lead to a drug treatment that limits its evolution as quickly as possible,” he added.

An “unmasking”

Commenting on the findings, Anthony Geraci, MD, director of neuromuscular medicine, Northwell Health, Great Neck, N.Y., said these case reports of myasthenia gravis after SARS-CoV-2 infection are “not unique or novel as there has been a long understanding that seropositive [AChR antibody-positive] myasthenia gravis can and is frequently ‘unmasked’ in the setting” of several viral and bacterial infections.

“Antibodies in myasthenia gravis are of a type that take several weeks to develop to measurable levels as in the reported cases by Restivo et al., giving strong support to the notion that subclinical myasthenia gravis can be immunologically upregulated in the setting of viral infection and this is a far more likely explanation of the observed association reported,” added Dr. Geraci, who was not involved with the research.

He noted that, at his institution, “we have also observed ocular myasthenia gravis emerge in patients with SARS-CoV-2 infection, with similar double vision and lid droop, as we have seen similarly in patients with Zika, West Nile, and other viral infections, as well as a multiplicity of bacterial infections.”

“Most of our observed patients have responded to treatment much the same as reported by the three cases from Restivo and colleagues,” Dr. Geraci reported.

The authors of the study disclosed no conflicts of interest.

A version of this article originally appeared on Medscape.com.

FDA approves ofatumumab (Kesimpta) for relapsing forms of MS

The Food and Drug Administration has approved ofatumumab (Kesimpta) injection for the treatment of adults with relapsing forms of multiple sclerosis, including relapsing-remitting MS, active secondary progressive MS, and clinically isolated syndrome, Novartis announced in a press release. It is the first FDA-approved, self-administered, targeted B-cell therapy for these conditions and is delivered via an autoinjector pen.

“This approval is wonderful news for patients with relapsing multiple sclerosis,” Stephen Hauser, MD, director of the Weill Institute for Neurosciences at the University of California, San Francisco, said in the press release. “Through its favorable safety profile and well-tolerated monthly injection regimen, patients can self-administer the treatment at home, avoiding visits to the infusion center,” he noted.

Dr. Hauser is cochair of the steering committee for the phase 3 ASCLEPIOS I and II studies that were part of the basis for the FDA’s approval.

Bruce Bebo, PhD, executive vice president of research at the National MS Society, said that because response to disease-modifying treatments varies among individuals with MS, it’s important to have a range of treatment options available with differing mechanisms of action. “We are pleased to have an additional option approved for the treatment of relapsing forms of MS,” he said.

Twin studies

Formerly known as OMB157, ofatumumab is a precisely dosed anti-CD20 monoclonal antibody administered subcutaneously via once-monthly injection. Novartis noted, however, that initial doses are given at weeks 0, 1, and 2, with the first injection administered in the presence of a health care professional.

The drug “is thought to work by binding to a distinct epitope on the CD20 molecule inducing potent B-cell lysis and depletion,” the manufacturer noted.

As previously reported, results of the ASCLEPIOS I and II studies were presented at the 2019 Congress of the European Committee for Treatment and Research in Multiple Sclerosis (ECTRIMS), with additional results presented at the 2020 Virtual Annual Meeting of the Consortium of Multiple Sclerosis Centers. In addition, the findings were published in the New England Journal of Medicine.

The twin, identically designed phase 3 studies assessed the safety and efficacy of the drug at a monthly subcutaneous dose of 20 mg versus once-daily 14-mg oral teriflunomide tablets. Together, the studies included 1,882 adult patients at more than 350 sites in 37 countries.

Results showed that the study drug reduced the annualized relapse rate (ARR) by 51% in the first study and by 59% in the second versus teriflunomide (P < .001 in both studies), meeting the primary endpoint. Both studies also showed significant reductions of gadolinium-enhancing (Gd+) T1 lesions (by 98% and 94%, respectively) and new or enlarging T2 lesions (by 82% and 85%).
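
As a rough illustration of what those relative reductions mean in absolute terms, the sketch below applies the reported 51% and 59% figures to a hypothetical comparator ARR of 0.25 relapses per patient-year; that baseline value is a placeholder for illustration only, not a figure from the trials or the press release.

```python
# Illustrative arithmetic only: translating a relative reduction in annualized
# relapse rate (ARR) into an absolute rate. The comparator ARR is a hypothetical
# placeholder, not a value from the ASCLEPIOS trials or the press release.
comparator_arr = 0.25  # hypothetical relapses per patient-year on teriflunomide

for study, relative_reduction in [("ASCLEPIOS I", 0.51), ("ASCLEPIOS II", 0.59)]:
    treated_arr = comparator_arr * (1 - relative_reduction)
    print(f"{study}: {relative_reduction:.0%} reduction -> "
          f"{treated_arr:.3f} vs {comparator_arr:.2f} relapses per patient-year")
```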

The most commonly observed treatment-related adverse events for ofatumumab were upper respiratory tract infection, headache, and injection-related reactions.

Although the FDA first approved ofatumumab in 2009 for the treatment of chronic lymphocytic leukemia (CLL), it was then administered as a high-dose intravenous infusion by a health care provider. “This is a different dosing regimen and route of administration than was previously approved for the CLL indication,” the company noted.

The drug is expected to be available in the United States in September.

A version of this article originally appeared on Medscape.com.

Age, smoking among leading cancer risk factors for SLE patients

A new study has quantified cancer risk factors in patients with systemic lupus erythematosus, including smoking and the use of certain medications.

“As expected, older age was associated with cancer overall, as well as with the most common cancer subtypes,” wrote Sasha Bernatsky, MD, PhD, of McGill University, Montreal, and coauthors. The study was published in Arthritis Care & Research.

To determine the risk of cancer in people with clinically confirmed incident systemic lupus erythematosus (SLE), the researchers analyzed data from 1,668 newly diagnosed lupus patients with at least one follow-up visit. All patients were enrolled in the Systemic Lupus International Collaborating Clinics inception cohort from across 33 different centers in North America, Europe, and Asia. A total of 89% (n = 1,480) were women, and 49% (n = 824) were white. The average follow-up period was 9 years.

Of the 1,668 SLE patients, 65 developed some type of cancer. The cancers included 15 breast; 10 nonmelanoma skin; 7 lung; 6 hematologic; 6 prostate; 5 melanoma; 3 cervical; 3 renal; 2 gastric; 2 head and neck; 2 thyroid; and 1 each of rectal, sarcoma, thymoma, and uterine cancer. No patient had more than one type, and the mean age of the cancer patients at the time of SLE diagnosis was 45.6 years (standard deviation, 14.5).
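
As a quick arithmetic check on that tally (reading the final item as one case each of rectal, sarcoma, thymoma, and uterine cancer), the per-site counts sum to the 65 cancers reported:

```python
# Sanity check: the per-site cancer counts listed above should total 65.
cancer_counts = {
    "breast": 15, "nonmelanoma skin": 10, "lung": 7, "hematologic": 6,
    "prostate": 6, "melanoma": 5, "cervical": 3, "renal": 3, "gastric": 2,
    "head and neck": 2, "thyroid": 2,
    # one case each of the remaining four types
    "rectal": 1, "sarcoma": 1, "thymoma": 1, "uterine": 1,
}

total = sum(cancer_counts.values())
print(f"Total cancers: {total}")  # expected: 65
assert total == 65
```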



Almost half of the 65 cancers occurred in past or current smokers, including all of the lung cancers, while only 33% of patients without cancer smoked prior to baseline. In univariate analyses, characteristics associated with a higher risk of cancer overall included older age at SLE diagnosis (adjusted hazard ratio, 1.05; 95% confidence interval, 1.03-1.06), White race/ethnicity (aHR, 1.34; 95% CI, 0.76-2.37), and smoking (aHR, 1.21; 95% CI, 0.73-2.01).

In multivariate analyses, the two characteristics most strongly associated with increased cancer risk were older age at SLE diagnosis and male sex. The analyses also confirmed that older age was a risk factor for breast cancer (aHR, 1.06; 95% CI, 1.02-1.10) and nonmelanoma skin cancer (aHR, 1.06; 95% CI, 1.02-1.11), while use of antimalarial drugs was associated with a lower risk of both breast (aHR, 0.28; 95% CI, 0.09-0.90) and nonmelanoma skin (aHR, 0.23; 95% CI, 0.05-0.95) cancers. For lung cancer, the strongest risk factor was smoking 15 or more cigarettes a day (aHR, 6.64; 95% CI, 1.43-30.9); for hematologic cancers, it was being in the top quartile of SLE disease activity (aHR, 7.14; 95% CI, 1.13-45.3).

The authors acknowledged their study’s limitations, including the small number of cancers overall and purposefully not comparing cancer risk in SLE patients with risk in the general population. Although their methods – “physicians recording events at annual visits, confirmed by review of charts” – were recognized as very suitable for the current analysis, they noted that a broader comparison would “potentially be problematic due to differential misclassification error” in cancer registry data.

Two of the study’s authors reported potential conflicts of interest, including receiving grants and consulting and personal fees from various pharmaceutical companies. No other potential conflicts were reported.

SOURCE: Bernatsky S et al. Arthritis Care Res. 2020 Aug 19. doi: 10.1002/acr.24425.

Beyond baseline, DBT no better than mammography for dense breasts

In women with extremely dense breasts, digital breast tomosynthesis (DBT) does not outperform digital mammography (DM) after the baseline exam, according to a review of nearly 1.6 million screenings.

At baseline, DBT improved recall and cancer detection rates for all women. On subsequent exams, differences in screening performance between DBT and DM varied by age and density subgroups. However, there were no significant differences in recall or cancer detection rates among women with extremely dense breasts in any age group.

Kathryn Lowry, MD, of the University of Washington in Seattle, and colleagues reported these findings in JAMA Network Open.

“Our findings suggest that density likely should not be used as a criterion to triage use of DBT for routine screening in settings where DBT is not universally available, as has been reported in physician surveys,” the authors wrote. “The largest absolute improvements of DBT screening were achieved on the baseline screening examination, suggesting that women presenting for their first screening examination are particularly important to prioritize for DBT,” regardless of breast density or age.
 

Study details

Dr. Lowry and colleagues reviewed 1,584,079 screenings in women aged 40-79 years. The exams were done from January 2010 to April 2018 at Breast Cancer Surveillance Consortium facilities across the United States.

Sixty-five percent of the exams were in White, non-Hispanic women, 25.2% were in women younger than 50 years, and 42.4% were in women with heterogeneously dense or extremely dense breasts. Subjects had no history of breast cancer, mastectomy, or breast augmentation.

The investigators compared the performance of 1,273,492 DMs with 310,587 DBTs across the four Breast Imaging Reporting and Data System (BI-RADS) density categories: almost entirely fatty, scattered fibroglandular density, heterogeneously dense, and extremely dense.

Findings were adjusted for race, family breast cancer history, and other potential confounders.
 

Recall and cancer detection rates

At baseline, recall and cancer detection rates were better with DBT than with DM, regardless of breast density subtype or patient age.

For instance, in women aged 50-59 years, screening recalls per 1,000 exams dropped from 241 with DM to 204 with DBT (relative risk, 0.84; 95% confidence interval, 0.73-0.98). Cancer detection rates per 1,000 exams in this age group increased from 5.9 with DM to 8.8 with DBT (RR, 1.50; 95% CI, 1.10-2.08).
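
As a back-of-the-envelope check, not a reanalysis, the crude ratios of those per-1,000 rates come out close to the relative risks the authors report; the published estimates are adjusted for confounders, so the crude ratios only approximate them.

```python
# Back-of-the-envelope check: crude ratios of the per-1,000 baseline rates
# quoted above for women aged 50-59 years, compared with the adjusted
# relative risks reported by the authors.
recall_dm, recall_dbt = 241, 204   # recalls per 1,000 exams
detect_dm, detect_dbt = 5.9, 8.8   # cancers detected per 1,000 exams

print(f"Crude recall ratio:    {recall_dbt / recall_dm:.2f} (reported RR, 0.84)")
print(f"Crude detection ratio: {detect_dbt / detect_dm:.2f} (reported RR, 1.50)")
```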

On follow-up exams, recall rates were lower with DBT for women with scattered fibroglandular density and heterogeneously dense breasts in all age groups, as well as in women with almost entirely fatty breasts aged 50-79 years.

“By contrast, there were no significant differences in recall rates in women with extremely dense breasts in any age group,” the authors wrote.

Cancer detection rates on follow-up exams varied by age and breast density.

Cancer detection rates were higher with DBT than with DM in women with heterogeneously dense breasts in all age groups and in women with scattered fibroglandular density at 50-59 years of age and 60-79 years of age. However, cancer detection rates were not significantly different with DBT or DM for women with almost entirely fatty breasts or extremely dense breasts of any age.

Implications and next steps

Dr. Lowry and colleagues noted that use of DBT has increased steadily since it was approved by the Food and Drug Administration in 2011, driven by studies demonstrating, among other things, earlier detection of invasive cancers.

The problem has been that previous investigations “largely dichotomized dense (heterogeneously dense and extremely dense) and nondense (almost entirely fat and scattered fibroglandular densities) categories,” the authors wrote. Therefore, the nuance of benefit across density subtypes hasn’t been clear.

The finding that “screening benefits of DBT differ for women with heterogeneously dense breasts [versus] extremely dense breasts is especially important in the current landscape of density legislation and demand for supplemental screening tests beyond mammography. To date, most state mandates and ... proposed federal legislation have uniformly grouped women with heterogeneously dense breasts and those with extremely dense breasts as a single population,” the authors wrote.

As the new findings suggest, “there are important differences in performance that may not be appreciated by combining density categories,” the authors added.

The results “suggest that women with extremely dense breast tissue may benefit more from additional screening than women with heterogeneously dense breasts who undergo tomosynthesis mammography,” Catherine Tuite, MD, of Fox Chase Cancer Center in Philadelphia, and colleagues wrote in a related editorial.

“Research to determine density and risk-specific outcomes for supplemental screening methods, such as magnetic resonance imaging ... molecular breast imaging, or ultrasonography is necessary to understand which screening method beyond DBT is best for average-risk women with heterogeneous or extremely dense breasts,” the editorialists wrote.

This research was funded by the National Cancer Institute and the Patient-Centered Outcomes Research Institute through the Breast Cancer Surveillance Consortium. Dr. Lowry reported grants from GE Healthcare outside the submitted work. The editorialists didn’t have any disclosures.

SOURCE: Lowry K et al. JAMA Netw Open. 2020 Jul 1;3(7):e2011792.

Publications
Topics
Sections

In women with extremely dense breasts, digital breast tomosynthesis (DBT) does not outperform digital mammography (DM) after the baseline exam, according to a review of nearly 1.6 million screenings.

At baseline, DBT improved recall and cancer detection rates for all women. On subsequent exams, differences in screening performance between DBT and DM varied by age and density subgroups. However, there were no significant differences in recall or cancer detection rates among women with extremely dense breasts in any age group.

Kathryn Lowry, MD, of the University of Washington in Seattle, and colleagues reported these findings in JAMA Network Open.

“Our findings suggest that density likely should not be used as a criterion to triage use of DBT for routine screening in settings where DBT is not universally available, as has been reported in physician surveys,” the authors wrote. “The largest absolute improvements of DBT screening were achieved on the baseline screening examination, suggesting that women presenting for their first screening examination are particularly important to prioritize for DBT,” regardless of breast density or age.
 

Study details

Dr. Lowry and colleagues reviewed 1,584,079 screenings in women aged 40-79 years. The exams were done from January 2010 to April 2018 at Breast Cancer Surveillance Consortium facilities across the United States.

Sixty-five percent of the exams were in White, non-Hispanic women, 25.2% were in women younger than 50 years, and 42.4% were in women with heterogeneously dense or extremely dense breasts. Subjects had no history of breast cancer, mastectomy, or breast augmentation.

The investigators compared the performance of 1,273,492 DMs with 310,587 DBTs across the four Breast Imaging Reporting and Database System density types: almost entirely fatty, scattered fibroglandular density, heterogeneously dense, and extremely dense.

Findings were adjusted for race, family breast cancer history, and other potential confounders.
 

Recall and cancer detection rates

At baseline, recall and cancer detection rates were better with DBT than with DM, regardless of breast density subtype or patient age.

For instance, in women aged 50-59 years, screening recalls per 1,000 exams dropped from 241 with DM to 204 with DBT (relative risk, 0.84; 95% confidence interval, 0.73-0.98). Cancer detection rates per 1,000 exams in this age group increased from 5.9 with DM to 8.8 with DBT (RR, 1.50; 95% CI, 1.10-2.08).

On follow-up exams, recall rates were lower with DBT for women with scattered fibroglandular density and heterogeneously dense breasts in all age groups, as well as in women with almost entirely fatty breasts aged 50-79 years.

“By contrast, there were no significant differences in recall rates in women with extremely dense breasts in any age group,” the authors wrote.

Cancer detection rates on follow-up exams varied by age and breast density.

Cancer detection rates were higher with DBT than with DM in women with heterogeneously dense breasts in all age groups and in women with scattered fibroglandular density at 50-59 years of age and 60-79 years of age. However, cancer detection rates were not significantly different with DBT or DM for women with almost entirely fatty breasts or extremely dense breasts of any age.
 

 

 

Implications and next steps

Dr. Lowry and colleagues noted that use of DBT has increased steadily since it was approved by the Food and Drug Administration in 2011, driven by studies demonstrating, among other things, earlier detection of invasive cancers.

The problem has been that previous investigations “largely dichotomized dense (heterogeneously dense and extremely dense) and nondense (almost entirely fat and scattered fibroglandular densities) categories,” the authors wrote. Therefore, the nuance of benefit across density subtypes hasn’t been clear.

The finding that “screening benefits of DBT differ for women with heterogeneously dense breasts [versus] extremely dense breasts is especially important in the current landscape of density legislation and demand for supplemental screening tests beyond mammography. To date, most state mandates and ... proposed federal legislation have uniformly grouped women with heterogeneously dense breasts and those with extremely dense breasts as a single population,” the authors wrote.

As the new findings suggest, “there are important differences in performance that may not be appreciated by combining density categories,” the authors added.

The results “suggest that women with extremely dense breast tissue may benefit more from additional screening than women with heterogeneously dense breasts who undergo tomosynthesis mammography,” Catherine Tuite, MD, of Fox Chase Cancer Center in Philadelphia, and colleagues wrote in a related editorial.

“Research to determine density and risk-specific outcomes for supplemental screening methods, such as magnetic resonance imaging ... molecular breast imaging, or ultrasonography is necessary to understand which screening method beyond DBT is best for average-risk women with heterogeneous or extremely dense breasts,” the editorialists wrote.

This research was funded by the National Cancer Institute and the Patient-Centered Outcomes Research Institute through the Breast Cancer Surveillance Consortium. Dr. Lowry reported grants from GE Healthcare outside the submitted work. The editorialists didn’t have any disclosures.

SOURCE: Lowry K et al. JAMA Netw Open. 2020 Jul 1;3(7):e2011792.

In women with extremely dense breasts, digital breast tomosynthesis (DBT) does not outperform digital mammography (DM) after the baseline exam, according to a review of nearly 1.6 million screenings.

At baseline, DBT improved recall and cancer detection rates for all women. On subsequent exams, differences in screening performance between DBT and DM varied by age and density subgroups. However, there were no significant differences in recall or cancer detection rates among women with extremely dense breasts in any age group.

Kathryn Lowry, MD, of the University of Washington in Seattle, and colleagues reported these findings in JAMA Network Open.

“Our findings suggest that density likely should not be used as a criterion to triage use of DBT for routine screening in settings where DBT is not universally available, as has been reported in physician surveys,” the authors wrote. “The largest absolute improvements of DBT screening were achieved on the baseline screening examination, suggesting that women presenting for their first screening examination are particularly important to prioritize for DBT,” regardless of breast density or age.
 

Study details

Dr. Lowry and colleagues reviewed 1,584,079 screenings in women aged 40-79 years. The exams were done from January 2010 to April 2018 at Breast Cancer Surveillance Consortium facilities across the United States.

Sixty-five percent of the exams were in White, non-Hispanic women, 25.2% were in women younger than 50 years, and 42.4% were in women with heterogeneously dense or extremely dense breasts. Subjects had no history of breast cancer, mastectomy, or breast augmentation.

The investigators compared the performance of 1,273,492 DMs with 310,587 DBTs across the four Breast Imaging Reporting and Database System density types: almost entirely fatty, scattered fibroglandular density, heterogeneously dense, and extremely dense.

Findings were adjusted for race, family breast cancer history, and other potential confounders.
 

Recall and cancer detection rates

At baseline, recall and cancer detection rates were better with DBT than with DM, regardless of breast density subtype or patient age.

For instance, in women aged 50-59 years, screening recalls per 1,000 exams dropped from 241 with DM to 204 with DBT (relative risk, 0.84; 95% confidence interval, 0.73-0.98). Cancer detection rates per 1,000 exams in this age group increased from 5.9 with DM to 8.8 with DBT (RR, 1.50; 95% CI, 1.10-2.08).

On follow-up exams, recall rates were lower with DBT for women with scattered fibroglandular density and heterogeneously dense breasts in all age groups, as well as in women with almost entirely fatty breasts aged 50-79 years.

“By contrast, there were no significant differences in recall rates in women with extremely dense breasts in any age group,” the authors wrote.

Cancer detection rates on follow-up exams varied by age and breast density.

Cancer detection rates were higher with DBT than with DM in women with heterogeneously dense breasts in all age groups and in women with scattered fibroglandular density at 50-59 years of age and 60-79 years of age. However, cancer detection rates were not significantly different with DBT or DM for women with almost entirely fatty breasts or extremely dense breasts of any age.
 

 

 

Implications and next steps

Dr. Lowry and colleagues noted that use of DBT has increased steadily since it was approved by the Food and Drug Administration in 2011, driven by studies demonstrating, among other things, earlier detection of invasive cancers.

The problem has been that previous investigations “largely dichotomized dense (heterogeneously dense and extremely dense) and nondense (almost entirely fat and scattered fibroglandular densities) categories,” the authors wrote. Therefore, the nuance of benefit across density subtypes hasn’t been clear.

The finding that “screening benefits of DBT differ for women with heterogeneously dense breasts [versus] extremely dense breasts is especially important in the current landscape of density legislation and demand for supplemental screening tests beyond mammography. To date, most state mandates and ... proposed federal legislation have uniformly grouped women with heterogeneously dense breasts and those with extremely dense breasts as a single population,” the authors wrote.

As the new findings suggest, “there are important differences in performance that may not be appreciated by combining density categories,” the authors added.

The results “suggest that women with extremely dense breast tissue may benefit more from additional screening than women with heterogeneously dense breasts who undergo tomosynthesis mammography,” Catherine Tuite, MD, of Fox Chase Cancer Center in Philadelphia, and colleagues wrote in a related editorial.

“Research to determine density and risk-specific outcomes for supplemental screening methods, such as magnetic resonance imaging ... molecular breast imaging, or ultrasonography is necessary to understand which screening method beyond DBT is best for average-risk women with heterogeneous or extremely dense breasts,” the editorialists wrote.

This research was funded by the National Cancer Institute and the Patient-Centered Outcomes Research Institute through the Breast Cancer Surveillance Consortium. Dr. Lowry reported grants from GE Healthcare outside the submitted work. The editorialists didn’t have any disclosures.

SOURCE: Lowry K et al. JAMA Netw Open. 2020 Jul 1;3(7):e2011792.


How effective is screening mammography for preventing breast cancer mortality?


EXPERT COMMENTARY

Although recommending screening mammograms continues to represent the standard of care, studies from the United States and abroad have questioned their value.1-3

In the June issue of JAMA Network Open, Australian investigators assessed the relative impacts of mammography screening and adjuvant therapy on breast cancer mortality, using data from population-based studies from 1982 through 2013.4 In recent decades, screening has increased substantially among Australian women.

Details of the study

Burton and Stevenson identified 76,630 women with invasive breast cancer included in the Victorian Cancer Registry in the state of Victoria, where women aged 50 to 69 are offered biennial screening.4 During the study’s time period, the use of adjuvant tamoxifen and chemotherapy increased substantially.

In the 31-year period assessed in this study, breast cancer mortality declined considerably. During the same period, however, the incidence of advanced breast cancer doubled.

These findings from Australia parallel those from the United States, the Netherlands, and Norway, where the incidence of advanced breast cancer was stable or increased after screening mammography was introduced.1-3

According to Burton and Stevenson, the increased incidence of advanced cancer indicates that screening mammography is not responsible for the decline in breast cancer mortality and that all of the decline can instead be attributed to increased uptake of adjuvant therapy.

The authors concluded that, because screening mammography does not reduce breast cancer mortality, state-sponsored screening programs should be discontinued.

Study strengths and limitations

Relevant data for this study were obtained from large population-based surveys for premenopausal and postmenopausal women with breast cancer.

The authors noted, however, that this analysis of observational data examining time trends across the study period can show only associations among breast cancer mortality, mammography screening participation, and adjuvant therapy uptake, and that causality can only be inferred.

The study in perspective

Although some will view the findings and recommendations of these Australian authors with skepticism or even hostility, I view their findings as good news—we have improved the treatment of breast cancer so dramatically that the benefits of finding early tumors with screening mammography have become attenuated.

Although it is challenging given the time constraints of office visits, I try to engage in shared decision making with my patients regarding when to start and how often to have screening mammography. ●

WHAT THIS EVIDENCE MEANS FOR PRACTICE

Given our evolving understanding regarding the value of screening mammograms, it is time to stop pressuring patients who are reluctant or unwilling to undergo screening. Likewise, insurance companies and government agencies should stop using screening mammography as a quality metric.

ANDREW M. KAUNITZ, MD

 

References
  1. Bleyer A, Welch HG. Effect of three decades of screening mammography on breast-cancer incidence. N Engl J Med. 2012;367:1998-2005.
  2. Autier P, Boniol M, Koechlin A, et al. Effectiveness of and overdiagnosis from mammography screening in the Netherlands: population based study. BMJ. 2017;359:j5224.
  3. Kalager M, Zelen M, Langmark F, et al. Effect of screening mammography on breast-cancer mortality in Norway. N Engl J Med. 2010;363:1203-1210.
  4. Burton R, Stevenson C. Assessment of breast cancer mortality trends associated with mammographic screening and adjuvant therapy from 1986 to 2013 in the state of Victoria, Australia. JAMA Netw Open. 2020;3:e208249.
Author and Disclosure Information

Andrew M. Kaunitz, MD, is Professor and Associate Chairman, Department of Obstetrics and Gynecology, University of Florida College of Medicine–Jacksonville; Medical Director and Director of Menopause and Gynecologic Ultrasound Services, UF Women’s Health Specialists at Emerson, Jacksonville. He serves on the OBG Management Board of Editors.

The author reports no financial relationships relevant to this article.

Mammography starting at 40 cuts risk of breast cancer death


 

New data will add fuel to the ongoing debate over the age at which mammography screening for breast cancer should begin. Many guidelines recommend starting at age 50.

But yearly mammography between the ages of 40 and 49 years was associated with a “substantial and significant” 25% reduction in breast cancer mortality during the first 10 years of follow-up, according to new data from the UK Age Trial.

The researchers calculated that 1,150 women aged 40-49 years would need to be screened to prevent one breast cancer death, or roughly one death prevented per 1,000 women screened.

However, they also noted that, in the years since the trial first began, there have been improvements in the treatment of breast cancer, so “there might be less scope for screening to reduce mortality in our current era.”

The study was published online August 12 in Lancet Oncology.

“Our results do indicate that screening before age 50 does indeed prevent deaths from breast cancer, with a minimal additional burden of overdiagnosis,” said lead author Stephen W. Duffy, MSc, director of the policy research unit in cancer awareness, screening and early diagnosis at Queen Mary University of London.

That said, Dr. Duffy does not expect policy makers to extend the age range on the basis of these results alone. “For one thing, they will want to consider costs, both human and financial,” he said. “For another, at this time, the services are concentrating on recovering from the hiatus caused by the COVID-19 crisis, and it would be impractical to try to expand the eligibility for screening.”

“I would say our results indicate that lowering the age range, although not necessarily to 40 but to some age below 50, will be at least worth considering when the current crisis is over,” he added.
 

Guideline recommendations differ

Breast cancer screening guidelines have generated debate, much of which has focused on the age at which to begin screening.

The U.S. Preventive Services Task Force and the American College of Physicians recommend screening every other year for average-risk women between the ages of 50 and 74 years.

However, other organizations disagree. The American College of Radiology and the Society of Breast Imaging both recommend that women start annual mammograms at age 40 and continue “as long as they are in good health.”

In the UK, where the study was conducted, a national breast cancer screening program offers mammography to women aged 50-70 years every 3 years.

Given the uncertainty that continues to exist over the optimal age for average-risk women to begin screening, the UK Age Trial set out to assess if screening should begin at a younger age and if that might lead to overdiagnosis of breast cancer.

Results from the study’s 17-year follow-up, published in 2015, showed a reduction in breast cancer mortality with annual screening, beginning at age 40 years, which was significant in the first 10 years after participants were randomized (Lancet Oncol. 2015;16:1123-32).

In the current study, Dr. Duffy and colleagues report on breast cancer incidence and mortality results in the UK Age trial after 23 years of follow-up.

The cohort included 160,921 women enrolled between Oct. 14, 1990, and Sept. 24, 1997, who were randomized to screening (n = 53,883) or the control group (n = 106,953).

Of those screened during the study period, 7,893 (18.1%) had at least one false-positive result. There were 10,439 deaths, of which 683 (7%) were attributed to breast cancer diagnosed during the study period.

At 10 years of follow-up, death from breast cancer was significantly lower among women in the screening versus control group (83 vs 219 deaths; relative risk, 0.75; P = .029).

However, no significant reduction was observed thereafter, with 126 versus 255 deaths occurring after more than 10 years of follow-up (RR, 0.98; 95% confidence interval, 0.79-1.22; P = .86), the authors note.

“This follow-up indicates that the gain in survival was concentrated in the first 10 years after the women began to be screened,” commented Kevin McConway, PhD, emeritus professor of applied statistics at the Open University, Milton Keynes, England.

“In those first 10 years, out of every 10,000 women invited for screening, on average, about 16 died of breast cancer, while in every 10,000 women in the control group who did not get the screening, on average, 21 died. These numbers indicate that lives were saved,” he said.

“But they also indicate that death from breast cancer was pretty rare in women of that age,” he pointed out.

“Because breast cancer deaths in younger women are not common, the estimates of breast cancer death rates are not very precise, despite the fact that the trial involved 160,000 women,” he said.

“Over the whole follow-up period so far, the difference in numbers of deaths between those who were screened in their 40s and those who were not is 6 deaths for every 10,000 women, but because of the statistical uncertainty, this figure could plausibly be larger, at 13 per 10,000. Or, in fact, the data are also consistent with a very slightly higher death rate [1 death per 10,000 women] in those who had the screening,” Dr. McConway explained.

“But none of those numbers is very large, out of 10,000 women. Allowing for the fact that not every woman invited for screening will actually attend the screening, the researchers estimate that 1,150 women would have to be screened in their 40s to prevent one breast cancer death,” he noted.
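
For readers who want to see how these per-10,000 figures and the number needed to screen relate to the raw counts, the minimal Python sketch below redoes the arithmetic from the randomization numbers and deaths reported above. It is a crude, unadjusted illustration, not the trial’s own survival analysis; in particular, the published figure of 1,150 women screened per death prevented also allows for invited women who never attended screening, which is not modeled here.

# Crude arithmetic from the UK Age Trial counts quoted above.
n_screen, n_control = 53_883, 106_953       # women randomized to each arm
d10_screen, d10_control = 83, 219           # breast cancer deaths, first 10 years
late_screen, late_control = 126, 255        # breast cancer deaths after 10 years

def per_10k(deaths, n):
    """Deaths per 10,000 women randomized."""
    return 10_000 * deaths / n

rr_10yr = (d10_screen / n_screen) / (d10_control / n_control)   # ~0.75

# Whole follow-up: about 6 fewer deaths per 10,000 women invited to screening.
diff_overall = (per_10k(d10_control + late_control, n_control)
                - per_10k(d10_screen + late_screen, n_screen))

print(f"10-year relative risk: {rr_10yr:.2f}")
print(f"Overall difference: {diff_overall:.1f} fewer deaths per 10,000 invited")
print(f"Crude invitations per death prevented: {10_000 / diff_overall:.0f}")
# The crude figure is roughly 1,800 invitations per death prevented; the
# published estimate of about 1,150 screened per death prevented further
# accounts for nonattendance among invited women.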
 

 

 

U.S. experts support starting screening at 40

“The American Society of Breast Surgeons has continued to recommend screening women at the age of 40,” said Stephanie Bernik, MD, FACS, chief of breast surgery, Mount Sinai West, and associate professor of surgery at the Icahn School of Medicine at Mount Sinai, New York. “There is no question that screening earlier saves more lives, and this study adds to the body of evidence already available.”

She pointed out that the argument against early screening was that there were many false positives, which, in turn, increased cost and anxiety. “Because women in their 40s are in the prime of their lives, often with young children, it seems as though screening would be paramount. Furthermore, it is well known that the sooner you find a cancer, the better, as the treatment needed to cure the cancer is less toxic and less dramatic.”

Catherine Tuite, MD, section chief, breast radiology, Fox Chase Cancer Center, Philadelphia, echoed a similar viewpoint. “There is no real debate on this issue. The USPSTF recommends beginning screening mammography at age 50, and it is no secret that this is a recommendation based on cost, not on saving women’s lives.”

She emphasized that these recommendations were made without the input of expert physicians. “The data, reaffirmed by this publication, have always been clear that the most years of life saved from deaths due to breast cancer are achieved in women who begin screening mammography at age 40. We know that one-sixth of all breast cancers are diagnosed before age 50, and many of these cancers are the most aggressive types of breast cancer.

“The guidelines from every organization representing health care professionals who actually diagnose and care for women with breast cancer recommend that all women of average risk begin annual screening mammography at age 40 and continue as long as the woman is in good health, with life expectancy of 10 years,” she continued.

As for screening intervals, annual mammography is also recommended for all age groups in the United States. At her institution, she explained, they are currently enrolling women in the TMIST screening mammography trial, which, among other endpoints, is evaluating a biennial screening interval for postmenopausal women of lower than average risk; outside of a trial setting, however, yearly screening for all women is recommended.

Dr. Duffy commented that, in the United Kingdom, the current protocol is screening mammography every 3 years, which he said “works well in women over the age of 50 years.” For younger women, however, more frequent screening would be needed; in this study, screening was done annually.

“The results not only from our study but from others around the world suggest that this [3-year screening interval] would not be very effective in women under 50, due partly to the denser breast tissue of younger women and partly to the faster progression on average of cancers diagnosed in younger women,” he said. “Some counties in Sweden, for example, offer screening to women under 50 at 18-month intervals, which seems more realistic.”

The study was funded by the Health Technology Assessment program of the National Institute for Health Research. Dr. Duffy reported also receiving grants from the NIHR outside this trial. Dr. Bernik, Dr. Tuite, and Dr. Hodgson reported no relevant financial relationships.

This article first appeared on Medscape.com.


Impaired senses, especially smell, linked to dementia


A poor combined score on tests of hearing, vision, smell, and touch is associated with a higher risk for dementia and cognitive decline among older adults, new research suggests. The study, which included almost 1,800 participants, adds to emerging evidence that even mild levels of multisensory impairment are associated with accelerated cognitive aging, the researchers noted.

Clinicians should be aware of this link between sensory impairment and dementia risk, said lead author Willa Brenowitz, PhD, assistant professor, department of psychiatry and behavioral sciences, University of California, San Francisco. “Many of these impairments are treatable, or at least physicians can monitor them; and this can improve quality of life, even if it doesn’t improve dementia risk.”

The findings were published online July 12 in Alzheimer’s & Dementia.
 

Additive effects

Previous research has focused on the link between dementia and individual senses, but this new work is unique in that it focuses on the additive effects of multiple impairments in sensory function, said Dr. Brenowitz. The study included 1,794 dementia-free participants in their 70s from the Health, Aging and Body Composition study, a prospective cohort study of healthy Black and White men and women.

Researchers tested participants’ hearing using a pure tone average without hearing aids and vision using contrast sensitivity with glasses permitted. They also measured vibrations in the big toe to assess touch and had participants identify distinctive odors such as paint thinner, roses, lemons, and onions to assess smell.

A score of 0-3 was assigned for each of the four sensory functions based on sample quartiles. Individuals in the best quartile were assigned a score of 0, and those in the worst quartile were assigned a score of 3.

The investigators added scores across all senses to create a summary score of multisensory function (0-12) and classified the participants into tertiles of good, medium, and poor. Individuals with a score of 0 would have good function in all senses, whereas those with 12 would have poor function in all senses. Those with medium scores could have a mix of impairments.
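
To make this scoring scheme concrete, the short Python sketch below builds the 0-12 summary score and the good/medium/poor groups from hypothetical data. The variable names, sample size, and the assumption that a higher raw value means worse function on every test are illustrative choices for the example, not details taken from the study.

import numpy as np
import pandas as pd

# Hypothetical raw test results, one row per participant. For this example,
# higher raw values are assumed to mean worse function on every test.
rng = np.random.default_rng(0)
raw = pd.DataFrame({sense: rng.normal(size=500)
                    for sense in ["hearing", "vision", "touch", "smell"]})

# Score each sense from 0 (best sample quartile) to 3 (worst sample quartile).
quartile_scores = pd.DataFrame(
    {sense: pd.qcut(raw[sense], 4, labels=False) for sense in raw.columns})

# Sum across the four senses for the 0-12 multisensory score, then split the
# cohort into tertiles labeled good, medium, and poor function.
summary = quartile_scores.sum(axis=1)
cuts = summary.quantile([1 / 3, 2 / 3]).to_numpy()
group = np.select([summary <= cuts[0], summary <= cuts[1]],
                  ["good", "medium"], default="poor")

print(pd.Series(group).value_counts())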

Participants with good multisensory function were more likely to be healthier than those with poor function. They were also significantly more likely to have completed high school (85.0% vs. 72.1%), were significantly less likely to have diabetes (16.9% vs. 27.9%), and were marginally less likely to have cardiovascular disease, high blood pressure, and history of stroke.

Investigators measured cognition using the Modified Mini-Mental State (3MS) examination, a test of global cognitive function, and the Digit Symbol Substitution Test (DSST), a measure of cognitive processing speed. Cognitive testing was carried out at the beginning of the study and repeated every other year.

Dementia was defined as the use of dementia medication, being hospitalized with dementia as a primary or secondary diagnosis, or having a 3MS score 1.5 standard deviations lower than the race-stratified Health ABC study baseline mean.

Over an average follow-up of 6.3 years, 18% of participants developed dementia.
 

Dose-response increase

Results showed that, with worsening multisensory function score, the risk for dementia increased in a dose-response manner. In models adjusted for demographics and health conditions, participants with a poor multisensory function score were more than twice as likely to develop dementia as those with a good score (hazard ratio, 2.05; 95% confidence interval, 1.50-2.81; P < .001). Those with a middle multisensory function score were 1.45 times as likely to develop dementia (HR, 1.45; 95% CI, 1.09-1.91; P < .001).

Even a 1-point worse multisensory function score was associated with a 14% higher risk for dementia (95% CI, 8%-21%), while a 4-point worse score was associated with 71% higher risk for dementia (95% CI, 38%-211%).

Smell was the sensory function most strongly associated with dementia risk. Participants whose sense of smell declined by 10% had a 19% higher risk for dementia versus a 1%-3% higher risk for declines in vision, hearing, and touch.

It is not clear why smell was a stronger determinant of dementia risk. However, loss of this sense is often considered to be a marker for Alzheimer’s disease “because it is closely linked with brain regions that are affected” in that disease, said Dr. Brenowitz.

However, that does not necessarily mean smell is more important than vision or hearing, she added. “Even if hearing and vision have a smaller contribution to dementia, they have a stronger potential for intervention.” The findings suggest “some additive or cumulative” effects for loss of the different senses. “There’s an association above and beyond those which can be attributed to individual sensory domains,” she said.
 

Frailty link

After including mobility, which is a potential mediator, estimates for the multisensory function score were slightly lower. “Walking speed is pretty strongly associated with dementia risk,” Dr. Brenowitz noted. Physical frailty might help explain the link between sensory impairment and dementia risk. “It’s not clear if that’s because people with dementia are declining or because people with frailty are especially vulnerable to dementia,” she said.

The researchers also assessed the role of social support, another potential mechanism by which sensory decline, especially in hearing and vision, could influence dementia risk. Although the study did not find substantial differences in social support measures, the investigators noted that questions assessing social support were limited in scope.

Interactions between multisensory function score and race, APOE e4 allele status, and sex were not significant.

Worsening multisensory function was also linked to faster annual rates of cognitive decline as measured by both the 3MS and DSST. Each 1-point worse score was associated with faster decline (P < .05), even after adjustment for demographics and health conditions.
 

Possible mechanisms

A number of possible mechanisms may explain the link between poor sensory function and dementia. It could be that neurodegeneration underlying dementia affects the senses, or vision and/or hearing loss leads to social isolation and poor mental health, which in turn could affect dementia risk, the researchers wrote. It also is possible that cardiovascular disease or diabetes affect both dementia risk and sensory impairment.

Dr. Brenowitz noted that, because cognitive tests rely on a certain degree of vision and hearing, impairment of these senses may complicate such tests. Still to be determined is whether correcting sensory impairments, such as wearing corrective lenses or hearing aids, affects dementia risk.

Meanwhile, it might be a good idea to more regularly check sensory function, especially vision and hearing, the researchers suggested. These functions affect various aspects of health and can be assessed rather easily. However, because smell is so strongly associated with dementia risk, Dr. Brenowitz said she would like to see it also become “part of a screening tool.”

A possible study limitation cited was that the researchers checked sensory function only once. “Most likely, some of these would change over time, but at least it captured sensory function at one point,” Dr. Brenowitz said.
 

 

 

“Sheds further light”

Commenting on the study, Jo V. Rushworth, PhD, associate professor and national teaching fellow, De Montfort University Leicester (England), said it “sheds further light on the emerging links” between multisensory impairment and cognitive decline leading to dementia. “The authors show that people with even mild loss of function in various senses are more likely to develop cognitive impairment.”

Dr. Rushworth was not involved with the study but has done research in the area.

The current results suggest that measuring patients’ hearing, vision, sense of smell, and touch might “flag at-risk groups” who could be targeted for dementia prevention strategies, Dr. Rushworth noted. Such tests are noninvasive and potentially less distressing than other methods of diagnosing dementia. “Importantly, the relatively low cost and simplicity of sensory tests offer the potential for more frequent testing and the use of these methods in areas of the world where medical facilities and resources are limited.”

This new study raises the question of whether the observed sensory impairments are a cause or an effect of dementia, Dr. Rushworth noted. “As the authors suggest, decreased sensory function can lead to a decrease in social engagement, mobility, and other factors which would usually contribute to counteracting cognitive decline.”

The study raises other questions, too, said Dr. Rushworth. She noted that the participants who experienced more severe sensory impairments were, on average, 2 years older than those with the least impairments. “To what degree were the observed sensory deficits linked to normal aging rather than dementia?”

As well, Dr. Rushworth pointed out that the molecular mechanisms that “kick-start” dementia are believed to occur in midlife – so possibly at an age younger than the study participants. “Do younger people of a ‘predementia’ age range display multisensory impairments?”

Because study participants could wear glasses during vision tests but were not allowed to wear hearing aids for the hearing tests, further standardization of sensory impairment is required, Dr. Rushworth said.

“Future studies will be essential in determining the value of clinical measurement of multisensory impairment as a possible dementia indicator and prevention strategy,” she concluded.

The study was funded by the National Institute on Aging, the National Institute of Nursing Research, and the Alzheimer’s Association. Dr. Brenowitz and Dr. Rushworth have reported no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


Dr. Rushworth was not involved with the study but has done research in the area.

The current results suggest that measuring patients’ hearing, vision, sense of smell, and touch might “flag at-risk groups” who could be targeted for dementia prevention strategies, Dr. Rushworth noted. Such tests are noninvasive and potentially less distressing than other methods of diagnosing dementia. “Importantly, the relatively low cost and simplicity of sensory tests offer the potential for more frequent testing and the use of these methods in areas of the world where medical facilities and resources are limited.”

This new study raises the question of whether the observed sensory impairments are a cause or an effect of dementia, Dr. Rushworth noted. “As the authors suggest, decreased sensory function can lead to a decrease in social engagement, mobility, and other factors which would usually contribute to counteracting cognitive decline.”

The study raises other questions, too, said Dr. Rushworth. She noted that the participants who experienced more severe sensory impairments were, on average, 2 years older than those with the least impairments. “To what degree were the observed sensory deficits linked to normal aging rather than dementia?”

As well, Dr. Rushworth pointed out that the molecular mechanisms that “kick-start” dementia are believed to occur in midlife – so possibly at an age younger than the study participants. “Do younger people of a ‘predementia’ age range display multisensory impairments?”

Because study participants could wear glasses during vision tests but were not allowed to wear hearing aids for the hearing tests, further standardization of sensory impairment is required, Dr. Rushworth said.

“Future studies will be essential in determining the value of clinical measurement of multisensory impairment as a possible dementia indicator and prevention strategy,” she concluded.

The study was funded by the National Institute on Aging, the National Institute of Nursing Research, and the Alzheimer’s Association. Dr. Brenowitz and Dr. Rushworth have reported no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


Issue
Neurology Reviews- 28(9)
Publish date: August 14, 2020

Multiple traits more common in difficult-to-treat patients with migraine


Compared with their counterparts who get more relief, patients with difficult-to-treat migraine are more likely to delay acute treatment and take over-the-counter and opioid painkillers. They are also more likely to have depression and impairment. Overall, insufficient responders—patients less likely to get relief shortly after acute treatment—are “more medically and psychosocially complex,” wrote the authors of the study, which appeared in the July/August issue of Headache.

Common characteristics of insufficient responders

The researchers, led by Louise Lombard, M Nutr, of Eli Lilly and Company, analyzed data from a 2014 cross-sectional survey of 583 patients with migraine. Of these, 200 (34%) were considered insufficient responders because they achieved freedom from pain within 2 hours of acute treatment in fewer than four of five attacks.
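
A minimal sketch of that classification rule, with hypothetical inputs rather than the survey's actual variables, might look like this:

```python
# Minimal sketch of the responder definition described above; the function and
# its input are illustrative, not the survey's actual variables.
def is_insufficient_responder(pain_free_within_2h: list[bool]) -> bool:
    """True when pain freedom within 2 hours of acute treatment was achieved
    in fewer than four of the last five treated attacks."""
    last_five = pain_free_within_2h[-5:]
    return sum(last_five) < 4

# Pain freedom in only 2 of the last 5 attacks -> insufficient responder.
print(is_insufficient_responder([True, False, True, False, False]))  # True
```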

The insufficient and sufficient responder groups were similar in age (mean = 40 years for both), gender (80% and 75% female, respectively; P = .170), and race (72% and 77% White, respectively; P = .279).

However, insufficient responders were clearly more affected by headaches, multiple treatments, and other burdens. Compared with those who had better responses to treatment, they were more likely to have four or more migraine headache days per month (46% vs. 31%), rebound or medication-overuse headaches (16% vs. 7%) and chronic migraine (12% vs. 5%, all P < .05).

They were also more likely to have comorbid depression (38% vs. 22%) and psychological conditions other than depression and anxiety (8% vs. 4%, all P < .05).

As for treatment, the odds of insufficient response were higher among patients who waited until pain appeared to take their medication (odds ratio = 1.83; 95% confidence interval [CI], 1.15-2.92; P = .011 after adjustment for covariates). Insufficient responders were also more likely to have been prescribed at least three unique preventive regimens (12% vs. 6%), to take over-the-counter medications (50% vs. 38%), and to take opioid painkillers (16% vs. 8%, all P < .05).

The authors, who caution that the study does not prove cause and effect, wrote that insufficient responders “may benefit from education on how and when to use current treatments.”
 

Managing insufficient responders

Neurology Reviews editor-in-chief Alan M. Rapoport, MD, said the study “confirms a lot of what we knew.” Dr. Rapoport, who was not involved in the study, is clinical professor of neurology at the University of California, Los Angeles.

“As expected, the insufficient responders used more opioids and over-the-counter medications, which is not the ideal way to treat migraine,” he said. “That probably caused them to have medication-overuse headache, which might have caused them to respond poorly to even the best treatment regimen. They also had more severe symptoms, more comorbidities, and a poorer quality of life. They also had more impairment and greater impact on work, with more of them unemployed.”

The insufficient responders also “took medication at the time or after the pain began, rather than before it when they thought the attack was beginning due to premonitory symptoms,” he said.

Dr. Rapoport also noted a surprising and unusual finding: Patients who did not report sensitivity to light as their most bothersome symptom were more likely to be insufficient responders (OR = 2.3, 95% CI [1.21–4.37], P = .011). “In all recent migraine studies,” he said, “the majority of patients selected photophobia as their most bothersome symptom.”

In the big picture, he said, the study suggests that “a third triptan does not seem to work better than the first two, patients with medication-overuse headache and chronic migraine and those not on preventive medication do not respond that well to acute care treatment, and the same is true when depression is present.”

No study funding was reported. Four study authors reported ties with Eli Lilly, and two reported employment by Adelphi Real World, which provided the survey results.

SOURCE: Lombard L et al. Headache. 2020;60(7):1325-39. doi: 10.1111/head.13835.



Concussion linked to risk for dementia, Parkinson’s disease, and ADHD


 

Concussion is associated with increased risk for subsequent development of attention-deficit/hyperactivity disorder (ADHD), as well as dementia and Parkinson’s disease, new research suggests. Results from a retrospective, population-based cohort study showed that controlling for socioeconomic status and overall health did not significantly affect this association.

The link between concussion and risk for ADHD and for mood and anxiety disorder was stronger in women than in men. In addition, having a history of multiple concussions strengthened the association between concussion and subsequent mood and anxiety disorder, dementia, and Parkinson’s disease compared with experiencing just one concussion.

The findings are similar to those of previous studies, noted lead author Marc P. Morissette, PhD, research assistant at the Pan Am Clinic Foundation in Winnipeg, Manitoba, Canada. “The main methodological differences separating our study from previous studies in this area is a focus on concussion-specific injuries identified from medical records and the potential for study participants to have up to 25 years of follow-up data,” said Dr. Morissette.

The findings were published online July 27 in Family Medicine and Community Health, a BMJ journal.
 

Almost 190,000 participants

Several studies have shown associations between head injury and increased risk for ADHD, depression, anxiety, Alzheimer’s disease, and Parkinson’s disease. However, many of these studies relied on self-reported medical history, included all forms of traumatic brain injury, and failed to adjust for preexisting health conditions.

An improved understanding of concussion and the risks associated with it could help physicians manage their patients’ long-term needs, the investigators noted.

In the current study, the researchers examined anonymized administrative health data collected between the periods of 1990–1991 and 2014–2015 in the Manitoba Population Research Data Repository at the Manitoba Center for Health Policy.

Eligible patients had been diagnosed with concussion in accordance with standard criteria. Participants were excluded if they had been diagnosed with dementia or Parkinson’s disease before the incident concussion during the study period. The investigators matched three control participants to each included patient on the basis of age, sex, and location.

Study outcome was time from index date (date of first concussion) to diagnosis of ADHD, mood and anxiety disorder, dementia, or Parkinson’s disease. The researchers controlled for socioeconomic status using the Socioeconomic Factor Index, version 2 (SEFI2), and for preexisting medical conditions using the Charlson Comorbidity Index (CCI).
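
As an illustration of this kind of adjusted time-to-event analysis, the sketch below fits a Cox proportional hazards model on simulated data using the lifelines library; every column name and value is a hypothetical stand-in for the repository's actual variables, not the authors' code.

```python
# Sketch of an adjusted time-to-event analysis on simulated data;
# all column names and values below are hypothetical stand-ins.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
concussion = rng.integers(0, 2, n)                 # exposure indicator
sefi2 = rng.normal(-0.05, 1.0, n)                  # socioeconomic factor index
cci = rng.poisson(0.2, n)                          # Charlson Comorbidity Index

# Toy event times with a higher hazard (shorter times) in the concussion group,
# censored administratively at 25 years of follow-up.
times = rng.exponential(20, n) / np.where(concussion == 1, 1.5, 1.0)
event = (times < 25).astype(int)
times = np.minimum(times, 25)

df = pd.DataFrame({"years": times, "event": event,
                   "concussion": concussion, "sefi2": sefi2, "cci": cci})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="event")
print(cph.hazard_ratios_["concussion"])  # adjusted HR; ~1.5 in this toy data
```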

The study included 28,021 men (mean age, 25 years) and 19,462 women (mean age, 30 years) in the concussion group and 81,871 men (mean age, 25 years) and 57,159 women (mean age, 30 years) in the control group. Mean SEFI2 score was approximately −0.05, and mean CCI score was approximately 0.2.
 

Dose effect?

Results showed that concussion was associated with an increased risk for ADHD (hazard ratio [HR], 1.39), mood and anxiety disorder (HR, 1.72), dementia (HR, 1.72), and Parkinson’s disease (HR, 1.57).

After a concussion, the risk of developing ADHD was 28% higher and the risk of developing mood and anxiety disorder was 7% higher among women than among men. Gender was not associated with risk for dementia or Parkinson’s disease after concussion.

Sustaining a second concussion increased the strength of the association with risk for dementia compared with sustaining a single concussion (HR, 1.62). Similarly, sustaining more than three concussions increased the strength of the association with the risk for mood and anxiety disorders (HR for more than three vs. one concussion, 1.22) and Parkinson’s disease (HR, 3.27).

A sensitivity analysis found similar associations between concussion and risk for mood and anxiety disorder among all age groups. Younger participants were at greater risk for ADHD, however, and older participants were at greater risk for dementia and Parkinson’s disease.

Increased awareness of concussion and the outcomes of interest, along with improved diagnostic tools, may have influenced the study’s findings, Dr. Morissette noted. “The sex-based differences may be due to either pathophysiological differences in response to concussive injuries or potentially a difference in willingness to seek medical care or share symptoms, concussion-related or otherwise, with a medical professional,” he said.

“We are hopeful that our findings will encourage practitioners to be cognizant of various conditions that may present in individuals who have previously experienced a concussion,” Dr. Morissette added. “If physicians are aware of the various associations identified following a concussion, it may lead to more thorough clinical examination at initial presentation, along with more dedicated care throughout the patient’s life.”
 

 

 

Association versus causation

Commenting on the research, Steven Erickson, MD, sports medicine specialist at Banner–University Medicine Neuroscience Institute, Phoenix, Ariz., noted that although the study showed an association between concussion and subsequent diagnosis of ADHD, anxiety, and Parkinson’s disease, “this association should not be misconstrued as causation.” He added that the study’s conclusions “are just as likely to be due to labeling theory” or a self-fulfilling prophecy.

“Patients diagnosed with ADHD, anxiety, or Parkinson’s disease may recall concussion and associate the two diagnoses; but patients who have not previously been diagnosed with a concussion cannot draw that conclusion,” said Dr. Erickson, who was not involved with the research.

Citing the apparent gender difference in the strength of the association between concussion and the outcomes of interest, Dr. Erickson noted that women are more likely to report symptoms in general “and therefore are more likely to be diagnosed with ADHD and anxiety disorders” because of differences in reporting rather than incidence of disease.

“Further research needs to be done to definitively determine a causal relationship between concussion and any psychiatric or neurologic diagnosis,” Dr. Erickson concluded.

The study was funded by the Pan Am Clinic Foundation. Dr. Morissette and Dr. Erickson have disclosed no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


Issue
Neurology Reviews- 28(9)
Article Source
From Family Medicine and Community Health
Publish date: August 12, 2020

Consensus document reviews determination of brain death


 

A group of experts representing various international professional societies has drafted a consensus statement on the determination of brain death or death by neurologic criteria (BD/DNC). The document, a result of the World Brain Death Project, surveys the clinical aspects of this determination, such as clinical testing, apnea testing, and the number of examinations required, as well as its social and legal aspects, including documentation, qualifications for making the determination, and religious attitudes toward BD/DNC.

The recommendations are the minimum criteria for BD/DNC, and countries and professional societies may choose to adopt stricter criteria, the authors noted. Seventeen supplements to the consensus statement contain detailed reports on the topics the statement examines, covering both adults and children.

“Perhaps the most important points of this project are, first, to show the worldwide acceptance of the concept of BD/DNC and what the minimum requirements are for BD/DNC,” said corresponding author Gene Sung, MD, MPH, director of the neurocritical care and stroke division at the University of Southern California, Los Angeles. Second, “this standard is centered around a clinical determination without the need for other testing.”

The consensus document and supplements were published online Aug. 3 in JAMA.

Comprehensive review

A lack of rigor has led to many differences in the determination of BD/DNC, said Dr. Sung. “Some of the variance that is common are the numbers of exams and examiners that are required and whether ancillary tests are required for determination of BD/DNC. In addition, a lot of guidelines and protocols that are in use are not thorough in detailing how to do the examinations and what to do in different circumstances.”

Professional societies such as the World Federation of Intensive and Critical Care recruited experts in BD/DNC to develop recommendations, which were based on relevant articles that they identified during a literature search. “We wanted to develop a fairly comprehensive document that, along with the 17 supplements, builds a foundation to show how to determine BD/DNC – what the minimum clinical criteria needed are and what to do in special circumstances,” Dr. Sung said.

Major sections of the statement include recommendations for the minimum clinical standards for the determination of BD/DNC in adults and children.

Determination must begin by establishing that the patient has sustained an irreversible brain injury that resulted in the loss of all brain function, according to the authors. Confounders such as pharmacologic paralysis and the effect of CNS depressant medications should be ruled out.

In addition, clinical evaluation must include an assessment for coma and an evaluation for brain stem areflexia. Among other criteria, the pupils should be fixed and nonresponsive to light, the face should not move in response to noxious cranial stimulation, and the gag and cough reflexes should be absent. Apnea testing is recommended to evaluate the responsiveness of respiratory centers in the medulla.

Although the definition of BD/DNC is the same in children as in adults, less evidence is available for the determination of BD/DNC in the very young. The authors thus advised a cautious approach to the evaluation of infants and younger children.

Recommendations vary by age and often require serial examinations, including apnea testing, they noted.

 

 

Ancillary testing

The consensus statement also reviews ancillary testing, which the authors recommend be required when the minimum clinical examination, including the apnea test, cannot be completed, and in the presence of confounding conditions that cannot be resolved.

The authors recommended digital subtraction angiography, radionuclide studies, and transcranial Doppler ultrasonography as ancillary tests based on blood flow in the brain. However, they recommended that CT angiography and magnetic resonance angiography not be used.

A lack of guidance makes performing an apnea test in patients receiving extracorporeal membrane oxygenation (ECMO) challenging, according to the authors. Nevertheless, they recommended that the same principles of BD/DNC be applied to adults and children receiving ECMO.

They further recommended a period of preoxygenation before the apnea test, and the document describes in detail the method for administering this test to people receiving ECMO.

Another potentially challenging situation pointed out in the consensus document is the determination of BD/DNC in patients who have been treated with targeted temperature management. Therapeutic hypothermia, particularly if it is preceded or accompanied by sedation, can temporarily impair brain stem reflexes, thus mimicking BD/DNC.

The new document includes a flowchart and step-by-step recommendations as well as suggestions for determining BD/DNC under these circumstances.

Among document limitations acknowledged by the authors is the lack of high-quality data from randomized, controlled trials on which to base their recommendations.

In addition, economic, technological, or personnel limitations may reduce the available options for ancillary testing, they added. Also, the recommendations do not incorporate contributions from patients or social or religious groups, although the authors were mindful of their concerns.

To promote the national and international harmonization of BD/DNC criteria, “medical societies and countries can evaluate their own policies in relation to this document and fix any deficiencies,” Dr. Sung said.

“Many countries do not have any BD/DNC policies and can use the documents from this project to create their own. There may need to be discussions with legal, governmental, religious, and societal leaders to help understand and accept BD/DNC and to help enact policies in different communities,” he added.

Divergent definitions

The determination of death is not simply a scientific question, but also a philosophical, religious, and cultural one, wrote Robert D. Truog, MD, director of the Harvard Center for Bioethics, Boston, and colleagues in an accompanying editorial. They added that future research should consider cultural differences on these questions.

“Most important is that there be a clear and logical consistency between the definition of death and the tests that are used to diagnose it,” Dr. Truog said.

The concept of whole brain death was advanced as an equivalent to biological death, “such that, when the brain dies, the body literally disintegrates, just as it does after cardiac arrest,” but evidence indicates that this claim is untrue, Dr. Truog said. Current tests also do not diagnose the death of the whole brain.

Another hypothesis is that brain stem death represents the irreversible loss of consciousness and the capacity for spontaneous respiration.

“Instead of focusing on biology, [this definition] focuses on values and is based on the claim that when a person is in a state of irreversible apneic unconsciousness, we may consider them to be dead,” said Dr. Truog. He and his coeditorialists argued that the concept of whole brain death should be replaced with that of brain stem death.

“This report should be a call for our profession, as well as for federal and state lawmakers, to reform our laws so that they are consistent with our diagnostic criteria,” Dr. Truog said.

“The most straightforward way of doing this would be to change U.S. law and adopt the British standard of brain stem death, and then refine our testing to make the diagnosis of irreversible apneic unconsciousness as reliable and safe as possible,” he concluded.

The drafting of the consensus statement was not supported by outside funding. Dr. Sung reported no relevant financial relationships. Dr. Truog reported receiving compensation from Sanofi and Covance for participating in data and safety monitoring boards unrelated to the consensus document.

A version of this article originally appeared on Medscape.com.
