Prescribing Epilepsy Meds in Pregnancy: ‘We Can Do Better,’ Experts Say
HELSINKI, FINLAND — When it comes to caring for women with epilepsy who become pregnant, there is a great deal of room for improvement, experts say.
“Too many women with epilepsy receive information about epilepsy and pregnancy only after pregnancy. We can do better,” Torbjörn Tomson, MD, PhD, senior professor of neurology and epileptology, Karolinska Institutet, Stockholm, Sweden, told delegates attending the Congress of the European Academy of Neurology 2024.
The goal in epilepsy is to maintain seizure control while minimizing exposure to potentially teratogenic medications, Dr. Tomson said. He added that pregnancy planning in women with epilepsy is important but also conceded that most pregnancies in this patient population are unplanned.
Overall, it’s important to tell patients that “there is a high likelihood of an uneventful pregnancy and a healthy offspring,” he said.
In recent years, new data have emerged on the risks to the fetus with exposure to different antiseizure medications (ASMs), said Dr. Tomson. This has led regulators, such as the US Food and Drug Administration and the European Medicines Agency, to issue restrictions on the use of some ASMs, particularly valproate and topiramate, in females of childbearing age.
Session chair Marte Bjørk, MD, PhD, of the Department of Neurology of Haukeland University Hospital, Bergen, Norway, questioned whether the latest recommendations from regulatory authorities have “sacrificed seizure control at the expense of teratogenic safety.”
To an extent, this is true, said Dr. Tomson, “as the regulations prioritize fetal health over women’s health.” However, “we have not seen poorer seizure control with newer medications” in recent datasets.
It’s about good planning, said Dr. Bjørk, who is responsible for the clinical guidelines for treatment of epilepsy in pregnancy in Norway.
Start With Folic Acid
One simple measure is to ensure that all women with epilepsy of childbearing age are prescribed low-dose folic acid, Dr. Tomson said — even those who report that they are not considering pregnancy.
When it comes to folic acid, recently published guidelines on ASM use during pregnancy are relatively straightforward, he said.
The data do not show that folic acid reduces the risk for major congenital malformations, but they do show that it improves neurocognitive outcomes in children of mothers who received folic acid supplements prior to and throughout pregnancy.
Dr. Tomson said the new American Academy of Neurology (AAN) guidelines recommend a dosage of 0.4 mg/d, which balances the demonstrated benefits of supplementation and potential negative consequences of high doses of folic acid.
“Consider 0.4 mg of folic acid for all women on ASMs that are of childbearing potential, whether they become pregnant or not,” he said. However, well-designed, preferably randomized, studies are needed to better define the optimal folic acid dosing for pregnancy in women with epilepsy.
Choosing the Right ASM
The choice of the most appropriate ASM in pregnancy is based on the potential for an individual drug to cause major congenital malformations and, in more recent years, the likelihood that a woman with epilepsy is using any other medications associated with neurodevelopmental disorders in offspring.
Balanced against this must be the effect of pregnancy on seizure control, and the maternal and fetal risks associated with seizures during pregnancy.
“There are ways to optimize seizure control and to reduce teratogenic risks,” said Dr. Tomson, adding that the new AAN guidelines provide updated evidence-based conclusions on this topic.
The good news is that “there has been almost a 40% decline in the rate of major congenital malformations associated with ASM use in pregnancy, in parallel with a shift from use of ASMs such as carbamazepine and valproate to lamotrigine and levetiracetam.” The latter two medications are associated with a much lower risk for such birth defects, he added.
This is based on the average rate of major congenital malformations in the EURAP registry that tracks the comparative risk for major fetal malformations after ASM use during pregnancy in over 40 countries. The latest reporting from the registry shows that this risk has decreased from 6.1% in 1998-2004 to 3.7% in 2015-2022.
Taking valproate during pregnancy is associated with a significantly increased risk for adverse neurodevelopmental outcomes, including autism spectrum disorder. However, the jury is still out on whether topiramate escalates the risk for neurodevelopmental disorders, because findings across studies have been inconsistent.
Overall, the AAN guidance, and similar advice from European regulatory authorities, is that valproate is associated with high risk for major congenital malformations and neurodevelopmental disorders. Topiramate has also been shown to increase the risk for major congenital malformations. Consequently, these two anticonvulsants are generally contraindicated in pregnancy, Dr. Tomson noted.
On the other hand, levetiracetam, lamotrigine, and oxcarbazepine seem to be the safest ASMs with respect to congenital malformation risk, and lamotrigine has the best documented safety profile when it comes to the risk for neurodevelopmental disorders.
Although there are newer ASMs on the market, including brivaracetam, cannabidiol, cenobamate, eslicarbazepine acetate, fenfluramine, lacosamide, perampanel, and zonisamide, at this juncture data on the risk potential of these agents are insufficient.
“For some of these newer meds, we don’t even have a single exposure in our large databases, even if you combine them all. We need to collect more data, and that will take time,” Dr. Tomson said.
Dose Optimization
Dose optimization of ASMs is also important — and for this to be accurate, it’s important to document an individual’s optimal ASM serum levels before pregnancy that can be used as a baseline target during pregnancy. However, Dr. Tomson noted, this information is not always available.
He pointed out that, with many ASMs, there can be a significant decline in serum concentration levels during pregnancy, which can increase seizure risk.
To address the uncertainty surrounding this issue, Dr. Tomson recommended that physicians consider future pregnancy when prescribing ASMs to women of childbearing age. He also advised discussing contraception with these patients, even if they indicate they are not currently planning to conceive.
The data clearly show the importance of planning a pregnancy so that the most appropriate and safest medications are prescribed, he said.
Dr. Tomson reported receiving research support, on behalf of EURAP, from Accord, Angelini, Bial, EcuPharma, Eisai, GlaxoSmithKline, Glenmark, GW Pharma, Hazz, Sanofi, Teva, UCB, Zentiva, and SF Group. He has received speakers’ honoraria from Angelini, Eisai, and UCB. Dr. Bjørk reported receiving speakers’ honoraria from Pfizer, Eisai, AbbVie, Best Practice, Lilly, Novartis, and Teva. She has received unrestricted educational grants from the Research Council of Norway, the Research Council of the Nordic Countries (NordForsk), and the Norwegian Epilepsy Association. She has received consulting honoraria from Novartis and is on the advisory boards of Eisai, Lundbeck, Angelini Pharma, and Jazz Pharmaceuticals. Dr. Bjørk also received institutional grants from marketing authorization holders of valproate.
A version of this article first appeared on Medscape.com.
FROM EAN 2024
Buprenorphine One of Many Options For Pain Relief In Oldest Adults
Some degree of pain is inevitable in older individuals, and as people pass 80 years of age, the harms of medications used to control chronic pain increase. Pain-reducing medication use in this age group may cause inflammation, gastric bleeding, kidney damage, or constipation.
These risks may lead some clinicians to avoid aggressive pain treatment in their eldest patients, resulting in unnecessary suffering.
“Pain causes harm beyond just the physical suffering associated with it,” said Diane Meier, MD, a geriatrician and palliative care specialist at Mount Sinai Medicine in New York City who treats many people in their 80s and 90s.
Downstream effects of untreated pain can include loss of mobility and isolation, Dr. Meier said. And as these harms mount, some clinicians may still avoid an analgesic that could bring great relief: buprenorphine.
“People think about buprenorphine like they think about methadone,” Dr. Meier said, as something prescribed to treat substance use disorder. In reality, it is an effective analgesic in other situations.
Buprenorphine is better suited to treating chronic pain than many other opioids: it is easier on the kidneys, carries a lower addiction risk, and is less likely to cause constipation in elderly patients than opioids such as oxycodone.
The transdermal patch form of buprenorphine (Butrans, Purdue Pharma) is changed weekly and is available at low starting doses.
“There’s an adage in geriatrics: start low and go slow,” said Jessica Merlin, MD, PhD, a palliative care and addiction medicine physician at the University of Pittsburgh Medical Center in Pittsburgh, Pennsylvania.
Dr. Merlin recommends beginning elderly patients with chronic pain on a 10-microgram/hour dose of Butrans, among the lowest doses available. Physicians could monitor side effects, which will generally be mild, with the aim of never increasing the dose if pain is managed.
Nonpharmacologic Remedies, Drug Considerations
“Nonpharmacologic therapy is very underutilized,” Dr. Merlin said, even though multiple alternatives to medications can improve chronic pain symptoms at any age.
Cognitive-behavioral therapy or acceptance and commitment therapy can both help people reduce the impact of pain, Dr. Merlin said. And for people who can do so, physical therapy programs, yoga, or tai chi are all ways to strengthen the body’s defenses against pain, Dr. Merlin added.
Sometimes medication is necessary, however.
“You can’t get an older person to participate in rehab if they are in severe pain,” Dr. Meier said, adding that judicious use of medications should go hand in hand with nonpharmacologic treatment.
When medications are unavoidable, internist Douglas S. Paauw, MD, starts with topical injections at the site of the pain — a troublesome joint, for example — rather than systemic medications that affect multiple organs and the brain.
“We try not to flood their body with meds” for localized problems, said Dr. Paauw, whose goal when treating elderly patients with pain is to improve their daily functioning and quality of life.
Dr. Paauw works at the University of Washington in Seattle and treats people who are approaching 100 years old. As some of his patients have grown older, Dr. Paauw’s interest in effective pain management has grown; he thinks that all internists and family medicine physicians need to know how to manage chronic pain in their eldest patients.
“Were you able to play with your grandkid? Were you able to go grocery shopping? Were you able to take a walk outside?” These are the kinds of improvements Dr. Paauw hopes to see in older patients, recognizing that the wear and tear of life — orthopedic stresses or healed fractures that cause lingering pain — make it impossible for many older people to be pain free.
Pain is often spread throughout the body rather than focused at one point, which calls for systemic medications if physical therapy and similar approaches have not reduced it. Per American Geriatrics Society (AGS) guidelines, in this situation Dr. Paauw starts with acetaminophen (Tylenol) as the lowest-risk systemic pain treatment.
Dr. Paauw often counsels older patients to begin with 2 grams/day of acetaminophen and then progress to 3 grams if the lower dose has manageable side effects, rather than the standard dose of 4 grams, which he feels is geared toward younger patients.
When acetaminophen doesn’t reduce pain sufficiently, or aggravates inflammation, Dr. Paauw may use the nerve pain medication pregabalin, or the antidepressant duloxetine — especially if the pain appears to be neuropathic.
Tricyclic antidepressants used to be recommended for neuropathic pain in older adults, but are now on the AGS’s Beers Criteria of drugs to avoid in elderly patients due to risk of causing dizziness or cardiac stress. Dr. Paauw might still use a tricyclic, but only after a careful risk-benefit analysis.
Nonsteroidal anti-inflammatory drugs (NSAIDs) like ibuprofen (Motrin) or naproxen (Aleve) could work in short bursts, Dr. Paauw said, although they may cause stomach bleeding or kidney damage in older patients.
This is why NSAIDs are not recommended by the AGS for chronic pain management. And opioids like oxycodone often lose effectiveness at low doses, leading to dose escalation and addiction.
“The American Geriatrics Society really puts opioids down at the bottom of the list,” Dr. Paauw said, to be used “judiciously and rarely.”
Opioids may interact with other drugs to increase risk of a fall, Dr. Meier added, making them inadvisable for older patients who live alone.
“That’s why knowing something about buprenorphine is so important,” Dr. Meier said.
Dr. Meier and Dr. Paauw are on the editorial board for Internal Medicine News. Dr. Merlin is a trainer for the Center to Advance Palliative Care, which Dr. Meier founded.
Opioids may interact with other drugs to increase risk of a fall, Dr. Meier added, making them inadvisable for older patients who live alone.
“That’s why knowing something about buprenorphine is so important,” Dr. Meier said.
Dr. Meier and Dr. Paauw are on the editorial board for Internal Medicine News. Dr. Merlin is a trainer for the Center to Advance Palliative Care, which Dr. Meier founded.
Some degree of pain is inevitable in older individuals, and as people pass 80 years of age, the harms of medications used to control chronic pain increase. Pain-reducing medication use in this age group may cause inflammation, gastric bleeding, kidney damage, or constipation.
These risks may lead some clinicians to avoid aggressive pain treatment in their eldest patients, resulting in unnecessary suffering.
“Pain causes harm beyond just the physical suffering associated with it,” said Diane Meier, MD, a geriatrician and palliative care specialist at Mount Sinai Medicine in New York City who treats many people in their 80s and 90s.
Downstream effects of untreated pain can include loss of mobility and isolation, Dr. Meier said. And even as these harms mount, some clinicians may avoid using an analgesic that could bring great relief: buprenorphine.
“People think about buprenorphine like they think about methadone,” Dr. Meier said, as something prescribed to treat substance use disorder. In reality, it is an effective analgesic in other situations.
For chronic pain, buprenorphine compares favorably with opioids such as oxycodone: it is easier on the kidneys, carries a lower addiction risk, and is less likely to cause constipation in elderly patients.
The transdermal patch form of buprenorphine (Butrans, Purdue Pharma) is changed weekly and starts at low doses.
“There’s an adage in geriatrics: start low and go slow,” said Jessica Merlin, MD, PhD, a palliative care and addiction medicine physician at the University of Pittsburgh Medical Center in Pittsburgh, Pennsylvania.
Dr. Merlin recommends starting elderly patients with chronic pain on a 10-microgram/hour dose of Butrans, among the lowest doses available. Physicians can then monitor side effects, which are generally mild, and avoid increasing the dose as long as pain is controlled.
Nonpharmacologic Remedies, Drug Considerations
“Nonpharmacologic therapy is very underutilized,” Dr. Merlin said, even though multiple alternatives to medications can improve chronic pain symptoms at any age.
Cognitive-behavioral therapy or acceptance and commitment therapy can both help people reduce the impact of pain, Dr. Merlin said. And for people who can do so, physical therapy programs, yoga, or tai chi are all ways to strengthen the body’s defenses against pain, Dr. Merlin added.
Sometimes medication is necessary, however.
“You can’t get an older person to participate in rehab if they are in severe pain,” Dr. Meier said, adding that judicious use of medications should go hand in hand with nonpharmacologic treatment.
When medications are unavoidable, internist Douglas S. Paauw, MD, starts with topical injections at the site of the pain — a troublesome joint, for example — rather than systemic medications that affect multiple organs and the brain.
“We try not to flood their body with meds” for localized problems, said Dr. Paauw, whose goal when treating elderly patients with pain is to improve their daily functioning and quality of life.
Dr. Paauw works at the University of Washington in Seattle and treats people who are approaching 100 years old. As some of his patients have grown older, Dr. Paauw’s interest in effective pain management has grown; he thinks that all internists and family medicine physicians need to know how to manage chronic pain in their eldest patients.
“Were you able to play with your grandkid? Were you able to go grocery shopping? Were you able to take a walk outside?” These are the kinds of improvements Dr. Paauw hopes to see in older patients, recognizing that the wear and tear of life — orthopedic stresses or healed fractures that cause lingering pain — make it impossible for many older people to be pain free.
Pain is often spread throughout the body rather than focused at one point, and when physical therapy and similar approaches have not reduced such widespread pain, systemic medication is required. Per American Geriatrics Society (AGS) guidelines, in this situation Dr. Paauw starts with acetaminophen (Tylenol) as the lowest-risk systemic pain treatment.
Dr. Paauw often counsels older patients to begin with 2 grams/day of acetaminophen and then progress to 3 grams if the lower dose has manageable side effects, rather than the standard dose of 4 grams that he feels is geared toward younger patients.
When acetaminophen doesn’t reduce pain sufficiently, or aggravates inflammation, Dr. Paauw may use the nerve pain medication pregabalin, or the antidepressant duloxetine — especially if the pain appears to be neuropathic.
Tricyclic antidepressants used to be recommended for neuropathic pain in older adults, but are now on the AGS’s Beers Criteria of drugs to avoid in elderly patients due to risk of causing dizziness or cardiac stress. Dr. Paauw might still use a tricyclic, but only after a careful risk-benefit analysis.
Nonsteroidal anti-inflammatory drugs (NSAIDs) like ibuprofen (Motrin) or naproxen (Aleve) could work in short bursts, Dr. Paauw said, although they may cause stomach bleeding or kidney damage in older patients.
This is why the AGS does not recommend NSAIDs for chronic pain management. And opioids like oxycodone tend not to remain effective for long at low doses, often leading to dose escalation and addiction.
“The American Geriatrics Society really puts opioids down at the bottom of the list,” Dr. Paauw said, to be used “judiciously and rarely.”
Opioids may interact with other drugs to increase risk of a fall, Dr. Meier added, making them inadvisable for older patients who live alone.
“That’s why knowing something about buprenorphine is so important,” Dr. Meier said.
Dr. Meier and Dr. Paauw are on the editorial board for Internal Medicine News. Dr. Merlin is a trainer for the Center to Advance Palliative Care, which Dr. Meier founded.
Night Owl or Lark? The Answer May Affect Cognition
Whether someone is a night owl or an early bird may affect cognitive performance, with evening types tending to score higher on cognitive tests, new research suggests.
“Rather than just being personal preferences, these chronotypes could impact our cognitive function,” said study investigator Raha West, MBChB, of Imperial College London, London, England, in a statement.
But the researchers also urged caution when interpreting the findings.
“It’s important to note that this doesn’t mean all morning people have worse cognitive performance. The findings reflect an overall trend where the majority might lean toward better cognition in the evening types,” Dr. West added.
In addition, across the board, getting the recommended 7-9 hours of nightly sleep was best for cognitive function, and sleeping for less than 7 or more than 9 hours had detrimental effects on brain function regardless of whether an individual was a night owl or lark.
The study was published online in BMJ Public Health.
A UK Biobank Cohort Study
The findings are based on a cross-sectional analysis of 26,820 adults aged 53-86 years from the UK Biobank database, who were categorized into two cohorts.
Cohort 1 had 10,067 participants (56% women) who completed four cognitive tests measuring fluid intelligence/reasoning, pairs matching, reaction time, and prospective memory. Cohort 2 had 16,753 participants (56% women) who completed two cognitive assessments (pairs matching and reaction time).
Participants self-reported sleep duration, chronotype, and quality. Cognitive test scores were evaluated against sleep parameters and health and lifestyle factors including sex, age, vascular and cardiac conditions, diabetes, alcohol use, smoking habits, and body mass index.
The results revealed a positive association between normal sleep duration (7-9 hours) and cognitive scores in Cohort 1 (beta, 0.0567), while extended sleep duration negatively impacted scores in both Cohorts 1 and 2 (beta, –0.188 and beta, –0.2619, respectively).
An individual’s preference for evening or morning activity correlated strongly with their test scores. In particular, night owls consistently performed better on cognitive tests than early birds.
“While understanding and working with your natural sleep tendencies is essential, it’s equally important to remember to get just enough sleep, not too long or too short,” Dr. West noted. “This is crucial for keeping your brain healthy and functioning at its best.”
Contrary to some previous findings, the study did not find a significant relationship between sleep, sleepiness/insomnia, and cognitive performance. This may be because specific aspects of insomnia, such as severity and chronicity, as well as comorbid conditions need to be considered, the investigators wrote.
They added that age and diabetes consistently emerged as negative predictors of cognitive functioning across both cohorts, in line with previous research.
Limitations of the study include the cross-sectional design, which limits causal inferences; the possibility of residual confounding; and reliance on self-reported sleep data.
Also, the study did not adjust for educational attainment, a factor potentially influential on cognitive performance and sleep patterns, because of incomplete data. The study also did not factor in depression and social isolation, which have been shown to increase the risk for cognitive decline.
No Real-World Implications
Several outside experts offered their perspective on the study in a statement from the UK nonprofit Science Media Centre.
The study provides “interesting insights” into the difference in memory and thinking in people who identify themselves as a “morning” or “evening” person, Jacqui Hanley, PhD, with Alzheimer’s Research UK, said in the statement.
However, without a detailed picture of what is going on in the brain, it’s not clear whether being a morning or evening person affects memory and thinking or whether a decline in cognition is causing changes to sleeping patterns, Dr. Hanley added.
Roi Cohen Kadosh, PhD, CPsychol, professor of cognitive neuroscience, University of Surrey, Guildford, England, cautioned that there are “multiple potential reasons” for these associations.
“Therefore, there are no implications in my view for the real world. I fear that the general public will not be able to understand that and will change their sleep pattern, while this study does not give any evidence that this will lead to any benefit,” Dr. Cohen Kadosh said.
Jessica Chelekis, PhD, MBA, a sleep expert from Brunel University London, Uxbridge, England, said that the “main takeaway should be that the cultural belief that early risers are more productive than ‘night owls’ does not hold up to scientific scrutiny.”
“While everyone should aim to get good-quality sleep each night, we should also try to be aware of what time of day we are at our (cognitive) best and work in ways that suit us. Night owls, in particular, should not be shamed into fitting a stereotype that favors an ‘early to bed, early to rise’ practice,” Dr. Chelekis said.
Funding for the study was provided by the Korea Institute of Oriental Medicine in collaboration with Imperial College London. Dr. Hanley, Dr. Cohen Kadosh, and Dr. Chelekis have no relevant disclosures.
A version of this article first appeared on Medscape.com.
FROM BMJ PUBLIC HEALTH
EMA Warns of Anaphylactic Reactions to MS Drug
Glatiramer acetate is a disease-modifying therapy (DMT) for relapsing multiple sclerosis (MS) that is given by injection.
The drug has been used to treat MS for more than 20 years, during which time it has had a good safety profile. Common side effects include vasodilation, arthralgia, anxiety, hypertonia, palpitations, and lipoatrophy.
A meeting of the EMA’s Pharmacovigilance Risk Assessment Committee (PRAC), held on July 8-11, considered evidence from an EU-wide review of all available data concerning anaphylactic reactions with glatiramer acetate. As a result, the committee concluded that the medicine is associated with a risk for anaphylactic reactions, which may occur shortly after administration or even months or years later.
Risk for Delays to Treatment
Cases involving the use of glatiramer acetate with a fatal outcome have been reported, PRAC noted.
The committee cautioned that because the initial symptoms can overlap with those of a postinjection reaction, there is a risk for delay in identifying an anaphylactic reaction.
PRAC has sanctioned a direct healthcare professional communication (DHPC) to inform healthcare professionals about the risk. Patients and caregivers should be advised of the signs and symptoms of an anaphylactic reaction and the need to seek emergency care if this should occur, the committee added. In the event of such a reaction, treatment with glatiramer acetate must be discontinued, PRAC stated.
Once adopted, the DHPC for glatiramer acetate will be disseminated to healthcare professionals by the marketing authorization holders.
Anaphylactic reactions associated with the use of glatiramer acetate have been noted in the medical literature for some years. A letter by members of the department of neurology at Albert Ludwig University Freiburg, Freiburg im Breisgau, Germany, published in the journal European Neurology in 2011, detailed six cases of anaphylactoid or anaphylactic reactions in patients while they were undergoing treatment with glatiramer acetate.
The authors highlighted that in one of the cases, a grade 1 anaphylactic reaction occurred 3 months after treatment with the drug was initiated.
A version of this article first appeared on Medscape.com.
Change in Clinical Definition of Parkinson’s Triggers Debate
Parkinson’s disease (PD) and dementia with Lewy bodies are currently defined by clinical features, which can be heterogeneous and do not capture the presymptomatic phase of neurodegeneration.
Recent advances have enabled the detection of misfolded and aggregated alpha-synuclein protein (synucleinopathy) — a key pathologic feature of these diseases — allowing for earlier and more accurate diagnosis. This has led two international research groups to propose a major shift from a clinical to a biological definition of the disease.
Both groups emphasized the detection of alpha-synuclein through recently developed seed amplification assays as a key diagnostic and staging tool, although they differ in their approaches and criteria.
NSD-ISS
Neuronal alpha-synuclein disease (NSD) is defined by the presence during life of pathologic neuronal alpha-synuclein (S, the first biological anchor) in cerebrospinal fluid (CSF), regardless of the presence of any specific clinical syndrome. Individuals with pathologic neuronal alpha-synuclein aggregates are at a high risk for dopaminergic neuronal dysfunction (D, the second key biological anchor).
Dr. Simuni and colleagues also proposed the NSD integrated staging system (NSD-ISS) rooted in the S and D biological anchors coupled with the degree of functional impairment caused by clinical signs or symptoms.
Stages 0-1 occur without signs or symptoms and are defined by the presence of pathogenic variants in the SNCA gene (stage 0), S alone (stage 1A), or S and D (stage 1B).
The presence of clinical manifestations marks the transition to stage 2 and beyond, with stage 2 characterized by subtle signs or symptoms but without functional impairment. Stages 2B-6 require both S and D and stage-specific increases in functional impairment.
“An advantage of the NSD-ISS will be to reduce heterogeneity in clinical trials by requiring biological consistency within the study cohort rather than identifying study participants on the basis of clinical criteria for Parkinson’s disease and dementia with Lewy bodies,” Dr. Simuni and colleagues pointed out in a position paper describing the NSD-ISS published online earlier this year in The Lancet Neurology.
The NSD-ISS will “evolve to include the incorporation of data-driven definitions of stage-specific functional anchors and additional biomarkers as they emerge and are validated.”
For now, the NSD-ISS is intended for research use only and not in the clinic.
The SynNeurGe Research Diagnostic Criteria
Separately, a team led by Anthony Lang, MD, with the Krembil Brain Institute at Toronto Western Hospital, Toronto, Ontario, Canada, proposed the SynNeurGe biological classification of PD.
Described in a companion paper published online in The Lancet Neurology, their “S-N-G” classification emphasizes the important interactions between three biological factors that contribute to disease: The presence or absence of pathologic alpha-synuclein (S) in tissues or CSF, an evidence of underlying neurodegeneration (N) defined by neuroimaging procedures, and the documentation of pathogenic gene variants (G) that cause or strongly predispose to PD.
These three components link to a clinical component, defined either by a single high-specificity clinical feature or by multiple lower-specificity clinical features.
As with the NSD-ISS, the SynNeurGe model is intended for research purposes only and is not ready for immediate application in the clinic.
Both groups acknowledged the need for studies to test and validate the proposed classification systems.
Caveats, Cautionary Notes
Adopting a biological definition of PD would represent a major shift for the field and has prompted considerable discussion and healthy debate.
Commenting for this news organization, James Beck, PhD, chief scientific officer at the Parkinson’s Foundation, said the principle behind the proposed classifications is where “the field needs to go.”
“Right now, people with Parkinson’s take too long to get a confirmed diagnosis of their disease, and despite best efforts, clinicians can get it wrong, not diagnosing people or maybe misdiagnosing people,” Dr. Beck said. “Moving to a biological basis, where we have better certainty, is going to be really important.”
Beck noted that the NSD-ISS “goes all in on alpha-synuclein,” which does play a big role in PD, but added, “I don’t know if I want to declare a winner after the first heat. There are other biomarkers that are coming to fruition but still need validation, and alpha-synuclein may be just one of many to help determine whether someone has Parkinson’s disease or not.”
Un Kang, MD, director of translational research at the Fresco Institute for Parkinson’s & Movement Disorders at NYU Langone Health, New York City, told this news organization that alpha-synuclein has “very high diagnostic accuracy” but cautioned that the adoption of a biological definition for PD would not usurp a clinical diagnosis.
“We need both,” Dr. Kang said. “But knowing the underlying pathology is important for earlier diagnosis and testing of potential therapies to treat the molecular pathology. If a patient doesn’t have abnormal synuclein, you may be treating the wrong disease.”
The coauthors of a recent JAMA Neurology perspective said the biological definitions are "exciting," but there is "wisdom" in tapping the brakes when attempting to establish a biological definition and classification system for PD.
“Although these two proposals represent significant steps forward, a sprint toward the finish line may not be wise,” wrote Njideka U. Okubadejo, MD, with University of Lagos, Nigeria; Joseph Jankovic, MD, with Baylor College of Medicine, Houston; and Michael S. Okun, MD, with University of Florida Health, Gainesville, Florida.
“A process that embraces inclusivity and weaves in evolving technological advancements will be important. Who benefits if implementation of a biologically based staging system for PD is hurried?” they continued.
The proposals rely heavily on alpha-synuclein assays, they noted, which currently require subjective interpretation and lack extensive validation. They also worry that the need for expensive and, in some regions, unattainable biological fluids (CSF) or imaging studies (dopamine transporter scan) may limit global access to both PD trials and future therapeutics.
They also worry about retiring the name Parkinson’s disease.
“Beyond the historical importance of the term Parkinson disease, any classification that proposes abandoning the two words in either clinical or research descriptions could have unintended global repercussions,” Dr. Okubadejo, Dr. Jankovic, and Dr. Okun cautioned.
Dr. Beck told this news organization he’s spoken to clinicians at meetings about this and “no one really likes the idea” of retiring the term Parkinson’s disease.
Frederick Ketchum, MD, and Nathaniel Chin, MD, with University of Wisconsin–Madison, worry about the “lived” experience of the asymptomatic patient after receiving a biological diagnosis.
“Biological diagnosis might enable effective prognostication and treatment in the future but will substantially change the experience of illness for patients now as new frameworks are slowly adopted and knowledge is gained,” they said in a correspondence in The Lancet Neurology.
“Understanding and addressing this lived experience remains a core task for health professionals and must be made central as we begin an era in which neurological diseases are redefined on a biological basis,” Dr. Ketchum and Dr. Chin advised.
A complete list of agencies that supported this work and author disclosures are available with the original articles. Dr. Beck and Dr. Kang had no relevant disclosures.
A version of this article first appeared on Medscape.com.
Study: AFib May Be Linked to Dementia in T2D
TOPLINE:
New-onset atrial fibrillation (AF) is associated with a substantially higher risk for all-cause dementia in patients with type 2 diabetes (T2D).
METHODOLOGY:
- Studies suggest a potential link between AF and dementia in the broader population, but evidence is scarce in people with diabetes, who are at increased risk for both conditions.
- This longitudinal observational study assessed the association between new-onset AF and dementia in 22,989 patients with T2D (median age at enrollment, 61.0 years; 62.3% men; 86.3% White individuals).
- New-onset AF was identified through hospital admission records using the International Classification of Diseases – 9th Revision (ICD-9) and ICD-10 codes, and dementia cases were identified using an algorithm developed by the UK Biobank.
- Time-varying Cox proportional hazard regression models were used to determine the association between incident dementia and new-onset AF.
TAKEAWAY:
- Over a median follow-up duration of about 12 years, 844 patients developed all-cause dementia, 342 were diagnosed with Alzheimer’s disease, and 246 had vascular dementia.
- Patients with incident AF had a higher risk of developing all-cause dementia (hazard ratio [HR], 2.15; 95% CI, 1.80-2.57), Alzheimer’s disease (HR, 1.44; 95% CI, 1.06-1.96), and vascular dementia (HR, 3.11; 95% CI, 2.32-4.17) than those without incident AF.
- The results are independent of common dementia risk factors, such as sociodemographic characteristics and lifestyle factors.
- The mean time intervals from the onset of AF to all-cause dementia, Alzheimer’s disease, and vascular dementia were 2.95, 2.81, and 3.37 years, respectively.
IN PRACTICE:
“AF is a significant risk factor for dementia in patients with type 2 diabetes, suggesting the importance of timely and effective treatment of AF, such as early rhythm control strategies and anticoagulant use, in preventing dementia among this demographic,” the authors wrote.
SOURCE:
The study, led by Ying Zhou, PhD, School of Public Health, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China, was published online in Diabetes, Obesity and Metabolism.
LIMITATIONS:
The study could not explore the link between different AF subtypes and dementia owing to its small sample size. The effects of AF treatment on the risk for dementia in patients with type 2 diabetes were not considered because of lack of information. The mostly White study population limits the generalizability of the findings to other races and ethnicities.
DISCLOSURES:
The study was supported by the National Social Science Fund of China. The authors declared no conflicts of interest.
A version of this article first appeared on Medscape.com.
Managing Agitation in Alzheimer’s Disease: Five Things to Know
Agitation is a neuropsychiatric symptom in patients with Alzheimer’s disease (AD), the most common form of dementia. The prevalence of this symptom is about 40%-65%, with the higher end of the range applying to patients who have moderate to severe dementia. The DICE approach is a collaborative process for managing behavioral symptoms in dementia, wherein the caregiver describes the behaviors, the provider investigates the etiology, the provider and caregiver create a treatment plan, and the provider evaluates the outcome of the interventions. We use this widely adopted approach as the framework for discussing recent advances in the management of agitation.
Here are five things to know about managing agitation in AD.
1. There is a new operational definition for agitation in dementia.
Agitation in dementia is a syndrome that encompasses specific behaviors across all dementia types. The 2023 operational definition of agitation in dementia by the International Psychogeriatric Association (IPA) includes three domains: excessive motor activity (including pacing, rocking, restlessness, and performing repetitious mannerisms), verbal aggression (including using profanity, screaming, and shouting), and physical aggression (including interpersonal aggression and mishandling or destruction of property). These behaviors must be persistent or recurrent for at least 2 weeks or represent a dramatic change from the person’s baseline behavior, must be associated with excessive distress or disability beyond what is caused by the cognitive impairment itself, and result in significant impairment in at least one of the three specified functional domains. Behavioral symptoms in dementia frequently co-occur, which affects treatment and prognosis. For instance, the risk for stroke associated with antipsychotic treatments appears to be higher in dementia-related psychosis without agitation than in agitation alone or in psychosis with agitation. Therefore, the use of a rating scale such as the Neuropsychiatric Inventory–Questionnaire (NPI-Q), which takes 5 minutes or less to administer, is recommended to identify and track behavioral symptoms and caregiver distress.
2. The etiology of agitation in dementia may be multifactorial.
It is important in every case to identify all underlying etiologies so that presumed causal and/or exacerbating factors are not inadvertently missed. Agitation may be a means of communicating distress owing to unmet needs or a patient-environment mismatch (function-focused approach) or may be a direct consequence of the dementia itself (behavioral-symptom approach). These approaches are not mutually exclusive. A patient can present with agitation as a direct consequence of dementia and inadequately treated pain concurrently.
The new IPA definition specifies several exclusion criteria for agitation in dementia, including underlying medical conditions, delirium, substance use, and suboptimal care conditions. It is especially crucial to accurately identify delirium because dementia is an independent risk factor for delirium, which in turn may accelerate the progression of cognitive and functional decline. Even subsyndromal delirium in older adults leads to a higher 3-year mortality rate that is comparable to that seen in delirium. Older adults with acute-onset agitation in the context of dementia should undergo a comprehensive assessment for delirium, as agitation may be the only indication of a serious underlying medical condition.
3. Nonpharmacologic interventions should be used whenever possible.
The wider adoption of nonpharmacologic interventions in clinical practice has been greatly limited by the heterogeneity in study protocols, including in selection of participants, in the types of dementias included, and in defining and applying the intervention strategies. Nevertheless, there is general consensus that individualized behavioral strategies that build on the patients’ interests and preserved abilities are more effective, at least in the short term. Patients best suited for these interventions are those with less cognitive decline, better communication skills, less impairment in activities of daily living, and higher responsiveness. A systematic review of systematic reviews found music therapy to be the most effective intervention for reducing agitation and aggression in dementia, along with behavioral management techniques when supervised by healthcare professionals. On the other hand, physical restraints are best avoided: their use in hospitalized patients has been associated with longer stays, higher costs, and lower odds of being discharged home, and their use in long-term care patients has been associated with longer stays, increased risk for medical complications, and functional decline.
4. Antidepressants are not all equally safe or efficacious in managing agitation.
In a network meta-analysis examining the effects of several antidepressants on agitation in dementia, citalopram had just under a 95% probability of efficacy and was the only antidepressant significantly more efficacious than placebo. In the multicenter CitAD trial, citalopram was efficacious and well tolerated for the treatment of agitation in AD, but the mean dose used, 30 mg/d, exceeded the maximum dose of 20 mg/d recommended by the US Food and Drug Administration (FDA) for those aged 60 years or older. The optimal candidates for citalopram were patients under age 85 with mild to moderate AD and mild to moderate nonpsychotic agitation, and the drug took up to 9 weeks to become fully effective. Because of the risk for dose-dependent QTc prolongation with citalopram, a baseline ECG must be done, and a second ECG is recommended if a clinical decision is made to exceed the recommended maximum daily dose. In the CitAD trial, 66% of patients in the citalopram arm concurrently received cholinesterase inhibitors and 44% received memantine, so these symptomatic treatments for AD should not be stopped solely to initiate a trial of citalopram.
The antiagitation effect of citalopram may well be a class effect of all selective serotonin reuptake inhibitors (SSRIs), given that there is also evidence favoring the use of sertraline and escitalopram. The S-CitAD trial, the first large, randomized controlled study of escitalopram for the treatment of agitation in dementia, is expected to announce its top-line results sometime this year. However, not all antidepressant classes appear to be equally efficacious or safe. In the large, 12-week randomized placebo-controlled trial SYMBAD, mirtazapine was not only ineffective in treating nonpsychotic agitation in AD but was also associated with a higher mortality rate that just missed statistical significance. Trazodone is also often used for treating agitation, but there is insufficient evidence regarding efficacy and a high probability of adverse effects, even at low doses.
5. Antipsychotics may be effective drugs for treating severe dementia-related agitation.
The CATIE-AD study found that the small beneficial effects of antipsychotics for treating agitation and psychosis in AD were offset by their adverse effects and high discontinuation rates, and boxed warnings imposed by the FDA in 2005 and 2008 cautioned against the use of both first- and second-generation antipsychotics to manage dementia-related psychosis owing to an increased risk for death. Subsequently, the quest for safer and more effective alternatives culminated in the FDA approval of brexpiprazole in 2023 for the treatment of agitation in AD, although the boxed warning was left in place. Three randomized controlled trials found brexpiprazole to be relatively safe, with statistically significant improvement in agitation. It was especially efficacious for severe agitation, but there is controversy about whether such improvement is clinically meaningful and whether brexpiprazole is truly superior to other antipsychotics for treating dementia-related agitation. As in the previously mentioned citalopram studies, most patients in the brexpiprazole studies received the drug as an add-on to memantine and/or a cholinesterase inhibitor, and it was proven effective over a period of up to 12 weeks across the three trials. Regarding other antipsychotics, aripiprazole and risperidone have been shown to be effective in treating agitation in patients with mixed dementia, but risperidone has also been associated with the highest risk for strokes (about 80% probability). Unfortunately, an unintended consequence of the boxed warnings on antipsychotics has been an increase in off-label substitution of psychotropic drugs with unproven efficacy and a questionable safety profile, such as valproic acid preparations, which have been linked to an increased short-term risk for accelerated brain volume loss and rapid cognitive decline, as well as a higher risk for mortality.
Lisa M. Wise, assistant professor, Psychiatry, at Oregon Health & Science University, and staff psychiatrist, Department of Psychiatry, Portland VA Medical Center, Portland, Oregon, and Vimal M. Aga, adjunct assistant professor, Department of Neurology, Oregon Health & Science University, and geriatric psychiatrist, Layton Aging and Alzheimer’s Disease Center, Portland, Oregon, have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Gut Biomarkers Accurately Flag Autism Spectrum Disorder
Gut microbial markers can accurately distinguish children with autism spectrum disorder (ASD) from neurotypical children, new research shows.
The findings could form the basis for development of a noninvasive diagnostic test for ASD and also provide novel therapeutic targets, wrote investigators, led by Siew C. Ng, MBBS, PhD, with the Microbiota I-Center (MagIC), the Chinese University of Hong Kong.
Their study was published online in Nature Microbiology.
Beyond Bacteria
The gut microbiome has been shown to play a central role in modulating the gut-brain axis, potentially influencing the development of ASD.
However, most studies in ASD have focused on the bacterial component of the microbiome. Whether nonbacterial microorganisms (such as gut archaea, fungi, and viruses) or function of the gut microbiome are altered in ASD remains unclear.
To investigate, the researchers performed metagenomic sequencing on fecal samples from 1627 boys and girls aged 1-13 years with and without ASD from five cohorts in China.
After controlling for diet, medication, and comorbidity, they identified 14 archaea, 51 bacteria, 7 fungi, 18 viruses, 27 microbial genes, and 12 metabolic pathways that were altered in children with ASD.
Machine-learning models using single-kingdom panels (archaea, bacteria, fungi, viruses) achieved area under the curve (AUC) values ranging from 0.68 to 0.87 in differentiating children with ASD from neurotypical control children.
A model based on a panel of 31 multikingdom and functional markers achieved “high predictive value” for ASD, with an AUC of 0.91 and comparable performance among boys and girls.
“The reproducible performance of the models across ages, sexes, and cohorts highlights their potential as promising diagnostic tools for ASD,” the investigators wrote.
They also noted that the accuracy of the model was largely driven by the biosynthesis pathways of ubiquinol-7 and thiamine diphosphate, which were less abundant in children with ASD, and may serve as therapeutic targets.
‘Exciting’ Possibilities
“This study broadens our understanding by including fungi, archaea, and viruses, where previous studies have largely focused on the role of gut bacteria in autism,” Bhismadev Chakrabarti, PhD, research director of the Centre for Autism at the University of Reading, United Kingdom, said in a statement from the nonprofit UK Science Media Centre.
“The results are broadly in line with previous studies that show reduced microbial diversity in autistic individuals. It also examines one of the largest samples seen in a study like this, which further strengthens the results,” Dr. Chakrabarti added.
He said this research may provide “new ways of detecting autism, if microbial markers turn out to strengthen the ability of genetic and behavioral tests to detect autism. A future platform that can combine genetic, microbial, and simple behavioral assessments could help address the detection gap.
“One limitation of this data is that it cannot assess any causal role for the microbiota in the development of autism,” Dr. Chakrabarti noted.
This study was supported by InnoHK, the Government of Hong Kong, Special Administrative Region of the People’s Republic of China, The D. H. Chen Foundation, and the New Cornerstone Science Foundation through the New Cornerstone Investigator Program. Dr. Ng has served as an advisory board member for Pfizer, Ferring, Janssen, and AbbVie; has received honoraria as a speaker for Ferring, Tillotts, Menarini, Janssen, AbbVie, and Takeda; is a scientific cofounder and shareholder of GenieBiome; receives patent royalties through her affiliated institutions; and is named as a co-inventor of patent applications that cover the therapeutic and diagnostic use of microbiome. Dr. Chakrabarti has no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
, new research shows.
The findings could form the basis for development of a noninvasive diagnostic test for ASD and also provide novel therapeutic targets, wrote investigators, led by Siew C. Ng, MBBS, PhD, with the Microbiota I-Center (MagIC), the Chinese University of Hong Kong.
The gut microbiome of children with autism spectrum disorder (ASD) differs from that of neurotypical children across bacterial and nonbacterial kingdoms alike, new research shows.
The findings could form the basis for the development of a noninvasive diagnostic test for ASD and could also provide novel therapeutic targets, wrote the investigators, led by Siew C. Ng, MBBS, PhD, with the Microbiota I-Center (MagIC), the Chinese University of Hong Kong.
Their study was published online in Nature Microbiology.
Beyond Bacteria
The gut microbiome has been shown to play a central role in modulating the gut-brain axis, potentially influencing the development of ASD.
However, most studies in ASD have focused on the bacterial component of the microbiome. Whether nonbacterial microorganisms (such as gut archaea, fungi, and viruses) or function of the gut microbiome are altered in ASD remains unclear.
To investigate, the researchers performed metagenomic sequencing on fecal samples from 1627 boys and girls aged 1-13 years with and without ASD from five cohorts in China.
After controlling for diet, medication, and comorbidity, they identified 14 archaea, 51 bacteria, 7 fungi, 18 viruses, 27 microbial genes, and 12 metabolic pathways that were altered in children with ASD.
Machine-learning models using single-kingdom panels (archaea, bacteria, fungi, viruses) achieved area under the curve (AUC) values ranging from 0.68 to 0.87 in differentiating children with ASD from neurotypical control children.
A model based on a panel of 31 multikingdom and functional markers achieved “high predictive value” for ASD, with an AUC of 0.91 and comparable performance among boys and girls.
“The reproducible performance of the models across ages, sexes, and cohorts highlights their potential as promising diagnostic tools for ASD,” the investigators wrote.
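As a rough illustration of the kind of evaluation described above (not the authors’ actual pipeline), the sketch below trains an off-the-shelf classifier on marker abundances and reports an AUC. The data are synthetic and the panel size, model choice, and effect on the first five markers are all assumptions made for demonstration.

```python
# Illustrative sketch only: scoring a microbial marker-panel classifier by AUC.
# All data here are synthetic; this is not the study's model or data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_markers = 400, 31                    # 31 mimics the multikingdom panel size
X = rng.lognormal(size=(n_samples, n_markers))    # fake relative abundances
y = rng.integers(0, 2, size=n_samples)            # fake labels: ASD (1) vs control (0)
X[y == 1, :5] *= 0.5                              # artificially depress 5 markers in "cases"

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.2f}")
```

In the study itself the 31-marker panel reached an AUC of 0.91; in this toy version, any discriminative signal comes only from the artificial depression of the first five markers.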
They also noted that the accuracy of the model was largely driven by the biosynthesis pathways of ubiquinol-7 and thiamine diphosphate, which were less abundant in children with ASD, and may serve as therapeutic targets.
‘Exciting’ Possibilities
“This study broadens our understanding by including fungi, archaea, and viruses, where previous studies have largely focused on the role of gut bacteria in autism,” Bhismadev Chakrabarti, PhD, research director of the Centre for Autism at the University of Reading, United Kingdom, said in a statement from the nonprofit UK Science Media Centre.
“The results are broadly in line with previous studies that show reduced microbial diversity in autistic individuals. It also examines one of the largest samples seen in a study like this, which further strengthens the results,” Dr. Chakrabarti added.
He said this research may provide “new ways of detecting autism, if microbial markers turn out to strengthen the ability of genetic and behavioral tests to detect autism. A future platform that can combine genetic, microbial, and simple behavioral assessments could help address the detection gap.
“One limitation of this data is that it cannot assess any causal role for the microbiota in the development of autism,” Dr. Chakrabarti noted.
This study was supported by InnoHK, the Government of Hong Kong, Special Administrative Region of the People’s Republic of China, The D. H. Chen Foundation, and the New Cornerstone Science Foundation through the New Cornerstone Investigator Program. Dr. Ng has served as an advisory board member for Pfizer, Ferring, Janssen, and AbbVie; has received honoraria as a speaker for Ferring, Tillotts, Menarini, Janssen, AbbVie, and Takeda; is a scientific cofounder and shareholder of GenieBiome; receives patent royalties through her affiliated institutions; and is named as a co-inventor of patent applications that cover the therapeutic and diagnostic use of microbiome. Dr. Chakrabarti has no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
FROM NATURE MICROBIOLOGY
Combat Exposure Increases Chronic Pain Among Women in the US Military
TOPLINE:
Combat exposure is strongly associated with chronic pain in active-duty servicewomen and female civilian dependents of military personnel on active duty; lower socioeconomic status and mental health conditions further increase the likelihood of chronic pain.
METHODOLOGY:
- Researchers analyzed claims data from the Military Health System to identify chronic pain diagnoses among active-duty servicewomen and civilian dependents of individuals on active duty.
- A total of 3,473,401 individuals (median age, 29 years) were included in the study, with 644,478 active-duty servicewomen and 2,828,923 civilian dependents.
- The study compared the incidence of chronic pain during 2006-2013, a period of heightened deployment intensity, with 2014-2020, a period of reduced deployment intensity.
- The primary outcome was the diagnosis of chronic pain.
TAKEAWAY:
- Active-duty servicewomen in the years 2006-2013 had a 53% increase in the odds of reporting chronic pain compared with those in the period between 2014 and 2020 (odds ratio [OR], 1.53; 95% CI, 1.48-1.58).
- Civilian dependents in the years 2006-2013 had a 96% increase in the odds of chronic pain compared with those in the later interval (OR, 1.96; 95% CI, 1.93-1.99).
- In 2006-2013, junior enlisted active-duty servicewomen had nearly a twofold increase in the odds of chronic pain (OR, 1.95; 95% CI, 1.83-2.09), while junior enlisted dependents had more than a threefold increase in the odds of chronic pain (OR, 3.05; 95% CI, 2.87-3.25) compared with senior officers.
- Comorbid mental conditions also were associated with an increased odds of reporting chronic pain (OR, 1.67; 95% CI, 1.65-1.69).
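For readers less familiar with the statistics quoted above, an odds ratio and its Wald 95% CI can be computed from a 2×2 table as sketched below. The counts are hypothetical, chosen only to show the arithmetic; they are not the study’s data.

```python
# Sketch: odds ratio and Wald 95% CI from a 2x2 table.
# Counts are made up for illustration, not taken from the study.
import math

a, b = 1200, 800    # high-deployment period: chronic pain / no chronic pain
c, d = 900, 1100    # comparison period:      chronic pain / no chronic pain

or_ = (a * d) / (b * c)                          # cross-product odds ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of log(OR)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)  # lower 95% bound
hi = math.exp(math.log(or_) + 1.96 * se_log_or)  # upper 95% bound
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

The same formula underlies each bullet above: an OR of 1.53 with CI 1.48-1.58, for example, means the odds of a chronic pain diagnosis were 53% higher in the earlier period, with a narrow interval because the cohort is so large.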
IN PRACTICE:
“The potential for higher rates of chronic pain in women veterans has been theorized to result from differences in support structures, family conflict, coping strategies, stress regulation, and exposure to military sexual trauma,” the authors wrote. “Our results suggest that these contributing factors may carry over to the women dependents of combat veterans in addition, indicating a line of research that requires urgent further exploration.”
SOURCE:
The study was led by Andrew J. Schoenfeld, MD, MSc, of the Center for Surgery and Public Health, Department of Orthopaedic Surgery at Brigham and Women’s Hospital and Harvard Medical School, in Boston. It was published online on July 5, 2024, in JAMA Network Open.
LIMITATIONS:
This study relied on claims-based data, which may have issues with coding accuracy and limited clinical granularity. The population size decreased over time owing to military downsizing, which could have affected the findings. The prevalence of chronic pain in the population was likely underestimated because individuals who did not report symptoms or who were diagnosed after separation from service were not identified.
DISCLOSURES:
This study was funded by the US Department of Defense. The lead author reported receiving grants and personal fees, serving as the editor-in-chief for Spine, acting as a consultant, and having other ties with various sources outside the submitted work.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Confronting Healthcare Disinformation on Social Media
More than 90% of internet users are active on social media, which had 4.76 billion users worldwide in January 2023. The digital revolution has reshaped the news landscape and changed how users interact with information. Social media has fostered an active relationship with the media, including the ability to interact directly with the content presented. It also has expanded the media’s ability to reach a large audience quickly.
These developments suggest that social media can be a useful tool in everyday medical practice for professionals and patients. But social media also can spread misinformation, as happened during the COVID-19 pandemic.
This characteristic is the focus of the latest research by Fabiana Zollo, a computer science professor at Ca’ Foscari University of Venice, Italy, and coordinator of the Data Science for Society laboratory. The research was published in The BMJ. Ms. Zollo’s research group aims to assess the effect of social media on misinformation and consequent behaviors related to health. “The study results focus primarily on two topics, the COVID-19 pandemic and vaccinations, but can also be applied to other health-related behaviors such as smoking and diet,” Ms. Zollo told Univadis Italy.
Social media has become an important tool for public health organizations to inform and educate citizens. Institutions can use it to monitor choices and understand which topics are being discussed most at a given time, thus comprehending how the topics evolve and take shape in public discourse. “This could lead to the emergence of people’s perceptions, allowing us to understand, among other things, what the population’s needs might be, including informational needs,” said Ms. Zollo.
Tenuous Causal Link
While social media offers public health organizations the opportunity to inform and engage the public, it also raises concerns about misinformation and the difficulty of measuring its effect on health behavior. Although some studies have observed correlations between exposure to misinformation on social media and levels of adherence to vaccination campaigns, establishing a causal link is complex. As the authors emphasize, “despite the importance of the effect of social media and misinformation on people’s behavior and the broad hypotheses within public and political debates, the current state of the art cannot provide definitive conclusions on a clear causal association between social media and health behaviors.”
Establishing a clear causal link between information obtained from social media and offline behavior is challenging owing to methodologic limitations and the complexity of the connections between online and offline behaviors. Studies often rely on self-reported data, which may not accurately reflect real behaviors, and struggle to isolate the effect of social media from other external influences. Moreover, many studies focus primarily on Western countries, limiting the generalizability of the results to other cultural and geographic settings.
Another issue highlighted by Ms. Zollo and colleagues is the lack of complete and representative data. Studies often lack detailed information about participants, such as demographic or geolocation data, and rely on limited samples. This lack makes it difficult to assess the effect of misinformation on different segments of the population and in different geographic areas.
“The main methodologic difficulty concerns behavior, which is difficult to measure because it would require tracking a person’s actions over time and having a shared methodology to do so. We need to understand whether online stated intentions do or do not translate into actual behaviors,” said Ms. Zollo. Therefore, despite the recognized importance of the effect of social media and misinformation on people’s general behavior and the broad hypotheses expressed within public and political debates, the current state of the art cannot provide definitive conclusions on a causal association between social media and health behaviors.
Institutions’ Role
Social media is a fertile ground for the formation of echo chambers (where users find themselves dialoguing with like-minded people, forming a distorted impression of the real prevalence of that opinion) and for reinforcing polarized positions around certain topics. “We know that on certain topics, especially those related to health, there is a lot of misinformation circulating precisely because it is easy to leverage factors such as fear and beliefs, even the difficulties in understanding the technical aspects of a message,” said Ms. Zollo. Moreover, institutions have not always provided timely information during the pandemic. “Often, when there is a gap in response to a specific informational need, people turn elsewhere, where those questions find answers. And even if the response is not of high quality, it sometimes confirms the idea that the user had already created in their mind.”
The article published in The BMJ aims primarily to provide information and evaluation insights to institutions rather than professionals or healthcare workers. “We would like to spark the interest of institutions and ministries that can analyze this type of data and integrate it into their monitoring system. Social monitoring (the observation of what happens on social media) is a practice that the World Health Organization is also evaluating and trying to integrate with more traditional tools, such as questionnaires. The aim is to understand as well as possible what a population thinks about a particular health measure, such as a vaccine: Through data obtained from social monitoring, a more realistic and comprehensive view of the problem could be achieved,” said Ms. Zollo.
A Doctor’s Role
And this is where the doctor comes in: All the information thus obtained allows for identifying the needs that the population expresses and that “could push a patient to turn elsewhere, toward sources that provide answers even if of dubious quality or extremely oversimplified.” The doctor can enter this landscape by trying to understand, even with the data provided by institutions, what needs the patients are trying to fill and what drives them to seek elsewhere and to look for a reference community that offers the relevant confirmations.
From the doctor’s perspective, therefore, it can be useful to understand how these dynamics arise and evolve because they could help improve interactions with patients. At the institutional level, social monitoring would be an excellent tool for providing services to doctors who, in turn, offer a service to patients. If it were possible to identify areas where a disinformation narrative is developing from the outset, both the doctor and the institutions would benefit.
Misinformation vs Disinformation
The rapid spread of false or misleading information on social media can undermine trust in healthcare institutions and negatively influence health-related behaviors. Ms. Zollo and colleagues, in fact, speak of misinformation in their discussion, not disinformation. “In English, a distinction is made between misinformation and disinformation, a distinction that we are also adopting in Italian. When we talk about misinformation, we mean information that is generally false, inaccurate, or misleading but has not been created with the intention to harm, an intention that is present in disinformation,” said Ms. Zollo.
The distinction is often not easy to define even at the operational level, but in her studies, Ms. Zollo is mainly interested in understanding how the end user interacts with content, not the purposes for which that content was created. “This allows us to focus on users and the relationships that are created on various social platforms, thus bypassing the author of that information and focusing on how misinformation arises and evolves so that it can be effectively combated before it translates into action (ie, into incorrect health choices),” said Ms. Zollo.
This story was translated from Univadis Italy, which is part of the Medscape Professional Network, using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
More than 90% of internet users are active on social media, which had 4.76 billion users worldwide in January 2023. The digital revolution has reshaped the news landscape and changed how users interact with information. Social media has fostered an active relationship with the media, including the ability to interact directly with the content presented. It also has augmented media’s ability to reach a large audience with tight deadlines.
These developments suggest that social media can be a useful tool in everyday medical practice for professionals and patients. But social media also can spread misinformation, as happened during the COVID-19 pandemic.
This characteristic is the focus of the latest research by Fabiana Zollo, a computer science professor at Ca’ Foscari University of Venice, Italy, and coordinator of the Data Science for Society laboratory. The research was published in The BMJ. Ms. Zollo’s research group aims to assess the effect of social media on misinformation and consequent behaviors related to health. “The study results focus primarily on two topics, the COVID-19 pandemic and vaccinations, but can also be applied to other health-related behaviors such as smoking and diet,” Ms. Zollo told Univadis Italy.
Social media has become an important tool for public health organizations to inform and educate citizens. Institutions can use it to monitor choices and understand which topics are being discussed most at a given time, thus comprehending how the topics evolve and take shape in public discourse. “This could lead to the emergence of people’s perceptions, allowing us to understand, among other things, what the population’s needs might be, including informational needs,” said Ms. Zollo.
Tenuous Causal Link
While social media offers public health organizations the opportunity to inform and engage the public, it also raises concerns about misinformation and the difficulty of measuring its effect on health behavior. Although some studies have observed correlations between exposure to misinformation on social media and levels of adherence to vaccination campaigns, establishing a causal link is complex. As the authors emphasize, “despite the importance of the effect of social media and misinformation on people’s behavior and the broad hypotheses within public and political debates, the current state of the art cannot provide definitive conclusions on a clear causal association between social media and health behaviors.” Establishing a clear causal link between information obtained from social media and offline behavior is challenging due to methodologic limitations and the complexity of connections between online and offline behaviors. Studies often rely on self-reported data, which may not accurately reflect real behaviors, and struggle to isolate the effect of social media from other external influences. Moreover, many studies primarily focus on Western countries, limiting the generalizability of the results to other cultural and geographical conditions.
Another issue highlighted by Ms. Zollo and colleagues is the lack of complete and representative data. Studies often lack detailed information about participants, such as demographic or geolocation data, and rely on limited samples. This lack makes it difficult to assess the effect of misinformation on different segments of the population and in different geographic areas.
“The main methodologic difficulty concerns behavior, which is difficult to measure because it would require tracking a person’s actions over time and having a shared methodology to do so. We need to understand whether online stated intentions do or do not translate into actual behaviors,” said Ms. Zollo. Therefore, despite the recognized importance of the effect of social media and misinformation on people’s general behavior and the broad hypotheses expressed within public and political debates, the current state of the art cannot provide definitive conclusions on a causal association between social media and health behaviors.
Institutions’ Role
Social media is a fertile ground for the formation of echo chambers (where users find themselves dialoguing with like-minded people, forming a distorted impression of the real prevalence of that opinion) and for reinforcing polarized positions around certain topics. “We know that on certain topics, especially those related to health, there is a lot of misinformation circulating precisely because it is easy to leverage factors such as fear and beliefs, even the difficulties in understanding the technical aspects of a message,” said Ms. Zollo. Moreover, institutions have not always provided timely information during the pandemic. “Often, when there is a gap in response to a specific informational need, people turn elsewhere, where those questions find answers. And even if the response is not of high quality, it sometimes confirms the idea that the user had already created in their mind.”
The article published in The BMJ aims primarily to provide information and evaluation insights to institutions rather than professionals or healthcare workers. “We would like to spark the interest of institutions and ministries that can analyze this type of data and integrate it into their monitoring system. Social monitoring (the observation of what happens on social media) is a practice that the World Health Organization is also evaluating and trying to integrate with more traditional tools, such as questionnaires. The aim is to understand as well as possible what a population thinks about a particular health measure, such as a vaccine: Through data obtained from social monitoring, a more realistic and comprehensive view of the problem could be achieved,” said Ms. Zollo.
A Doctor’s Role
And this is where the doctor comes in: All the information thus obtained allows for identifying the needs that the population expresses and that “could push a patient to turn elsewhere, toward sources that provide answers even if of dubious quality or extremely oversimplified.” The doctor can enter this landscape by trying to understand, even with the data provided by institutions, what needs the patients are trying to fill and what drives them to seek elsewhere and to look for a reference community that offers the relevant confirmations.
From the doctor’s perspective, therefore, it can be useful to understand how these dynamics arise and evolve because they could help improve interactions with patients. At the institutional level, social monitoring would be an excellent tool for providing services to doctors who, in turn, offer a service to patients. If it were possible to identify areas where a disinformation narrative is developing from the outset, both the doctor and the institutions would benefit.
Misinformation vs Disinformation
The rapid spread of false or misleading information on social media can undermine trust in healthcare institutions and negatively influence health-related behaviors. Ms. Zollo and colleagues, in fact, speak of misinformation in their discussion, not disinformation. “In English, a distinction is made between misinformation and disinformation, a distinction that we are also adopting in Italian. When we talk about misinformation, we mean information that is generally false, inaccurate, or misleading but has not been created with the intention to harm, an intention that is present in disinformation,” said Ms. Zollo.
The distinction is often not easy to define even at the operational level, but in her studies, Ms. Zollo is mainly interested in understanding how the end user interacts with content, not the purposes for which that content was created. “This allows us to focus on users and the relationships that are created on various social platforms, thus bypassing the author of that information and focusing on how misinformation arises and evolves so that it can be effectively combated before it translates into action (ie, into incorrect health choices),” said Ms. Zollo.
This story was translated from Univadis Italy, which is part of the Medscape Professional Network, using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
More than 90% of internet users are active on social media, which had 4.76 billion users worldwide in January 2023. The digital revolution has reshaped the news landscape and changed how users interact with information. Social media has fostered an active relationship with the media, including the ability to interact directly with the content presented. It also has augmented media’s ability to reach a large audience with tight deadlines.
These developments suggest that social media can be a useful tool in everyday medical practice for professionals and patients. But social media also can spread misinformation, as happened during the COVID-19 pandemic.
This characteristic is the focus of the latest research by Fabiana Zollo, a computer science professor at Ca’ Foscari University of Venice, Italy, and coordinator of the Data Science for Society laboratory. The research was published in The BMJ. Ms. Zollo’s research group aims to assess the effect of social media on misinformation and consequent behaviors related to health. “The study results focus primarily on two topics, the COVID-19 pandemic and vaccinations, but can also be applied to other health-related behaviors such as smoking and diet,” Ms. Zollo told Univadis Italy.
Social media has become an important tool for public health organizations to inform and educate citizens. Institutions can use it to monitor public sentiment and understand which topics are being discussed most at a given time, thus seeing how those topics evolve and take shape in public discourse. "This could lead to the emergence of people's perceptions, allowing us to understand, among other things, what the population's needs might be, including informational needs," said Ms. Zollo.
Tenuous Causal Link
While social media offers public health organizations the opportunity to inform and engage the public, it also raises concerns about misinformation and the difficulty of measuring its effect on health behavior. Although some studies have observed correlations between exposure to misinformation on social media and levels of adherence to vaccination campaigns, establishing a causal link is complex. As the authors emphasize, "despite the importance of the effect of social media and misinformation on people's behavior and the broad hypotheses within public and political debates, the current state of the art cannot provide definitive conclusions on a clear causal association between social media and health behaviors." Such a link is hard to establish because of methodologic limitations and the complexity of the connections between online and offline behaviors. Studies often rely on self-reported data, which may not accurately reflect real behaviors, and struggle to isolate the effect of social media from other external influences. Moreover, many studies focus primarily on Western countries, limiting the generalizability of the results to other cultural and geographic contexts.
Another issue highlighted by Ms. Zollo and colleagues is the lack of complete and representative data. Studies often lack detailed information about participants, such as demographic or geolocation data, and rely on limited samples. This lack makes it difficult to assess the effect of misinformation on different segments of the population and in different geographic areas.
"The main methodologic difficulty concerns behavior, which is difficult to measure because it would require tracking a person's actions over time and having a shared methodology to do so. We need to understand whether intentions stated online do or do not translate into actual behaviors," said Ms. Zollo. Therefore, despite the recognized importance of the effect of social media and misinformation on people's behavior, and the broad hypotheses expressed within public and political debates, the current state of the art cannot provide definitive conclusions on a causal association between social media and health behaviors.
Institutions’ Role
Social media is a fertile ground for the formation of echo chambers (where users find themselves dialoguing with like-minded people, forming a distorted impression of the real prevalence of that opinion) and for reinforcing polarized positions around certain topics. “We know that on certain topics, especially those related to health, there is a lot of misinformation circulating precisely because it is easy to leverage factors such as fear and beliefs, even the difficulties in understanding the technical aspects of a message,” said Ms. Zollo. Moreover, institutions have not always provided timely information during the pandemic. “Often, when there is a gap in response to a specific informational need, people turn elsewhere, where those questions find answers. And even if the response is not of high quality, it sometimes confirms the idea that the user had already created in their mind.”
The article published in The BMJ aims primarily to provide information and evaluation insights to institutions rather than professionals or healthcare workers. “We would like to spark the interest of institutions and ministries that can analyze this type of data and integrate it into their monitoring system. Social monitoring (the observation of what happens on social media) is a practice that the World Health Organization is also evaluating and trying to integrate with more traditional tools, such as questionnaires. The aim is to understand as well as possible what a population thinks about a particular health measure, such as a vaccine: Through data obtained from social monitoring, a more realistic and comprehensive view of the problem could be achieved,” said Ms. Zollo.
A Doctor’s Role
This is where the doctor comes in: The information thus obtained helps identify the needs the population expresses, needs that "could push a patient to turn elsewhere, toward sources that provide answers even if of dubious quality or extremely oversimplified." The doctor can step into this landscape by trying to understand, aided by the data institutions provide, which needs patients are trying to fill, what drives them to seek answers elsewhere, and why they look for a reference community that offers the confirmations they want.
From the doctor’s perspective, therefore, it can be useful to understand how these dynamics arise and evolve because they could help improve interactions with patients. At the institutional level, social monitoring would be an excellent tool for providing services to doctors who, in turn, offer a service to patients. If it were possible to identify areas where a disinformation narrative is developing from the outset, both the doctor and the institutions would benefit.
Misinformation vs Disinformation
The rapid spread of false or misleading information on social media can undermine trust in healthcare institutions and negatively influence health-related behaviors. Ms. Zollo and colleagues, in fact, speak of misinformation in their discussion, not disinformation. “In English, a distinction is made between misinformation and disinformation, a distinction that we are also adopting in Italian. When we talk about misinformation, we mean information that is generally false, inaccurate, or misleading but has not been created with the intention to harm, an intention that is present in disinformation,” said Ms. Zollo.
The distinction is often not easy to define even at the operational level, but in her studies, Ms. Zollo is mainly interested in understanding how the end user interacts with content, not the purposes for which that content was created. “This allows us to focus on users and the relationships that are created on various social platforms, thus bypassing the author of that information and focusing on how misinformation arises and evolves so that it can be effectively combated before it translates into action (ie, into incorrect health choices),” said Ms. Zollo.
This story was translated from Univadis Italy, which is part of the Medscape Professional Network, using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.