Neurology Reviews covers innovative and emerging news in neurology and neuroscience every month, with a focus on practical approaches to treating Parkinson's disease, epilepsy, headache, stroke, multiple sclerosis, Alzheimer's disease, and other neurologic disorders.

The leading independent newspaper covering neurology news and commentary.
When Medicine Isn’t the Last Stop

Updated Thu, 05/16/2024 - 09:16

A distant friend and I were recently chatting by email. After years of trying, she’s become a successful author and has decided to leave medicine to focus on her new career.

She’s excited about this, as it’s really what she’s always dreamed of doing, but at the same time she feels guilty about it. Leaving medicine for a new career isn’t quite the same as quitting a job as a waitress or an insurance salesman. You’ve put a lot of time, effort, and money into becoming an attending physician.

Dr. Allan M. Block, a neurologist in Scottsdale, Arizona.

I also once dreamed of being a successful writer (amongst other things) but have no complaints about where I landed. I like what I do. Besides, I don’t have her kind of imagination.

It’s a valid point, though. Becoming a doc in practice takes a minimum of 4 years of college and 4 years of medical school. Then you tack on a residency of 3 years (internal medicine) to 7 years (neurosurgery). On top of that, many add another 1-2 years of fellowship training. So you’re talking a minimum of 11 years, ranging up to 17.

Then you think of how much money was spent on college and medical school — tuition, living expenses, loan interest, not to mention the emotional toll of the training.

You also have to think that somewhere in there you got a chance to become a doctor while someone else didn’t.

So, I can see why she feels guilty, but she shouldn’t. She’s paid back all her loans, so no one else is left carrying the financial bag. The argument about denying someone else a spot can be kind of flimsy when you don’t know how that person might have turned out (the medical school dropout rate is 15%-18%).

Life is unpredictable. We often don’t really know what we want until we get there, and those journeys are rarely a straight line. That doesn’t mean those years were a waste; they’re just part of the trip, stepping stones that get you to the right place and help you realize who you really are. They also make a change like hers possible: the experience adds to your background and gives you the time and support to make the move.

She joins a group of other physicians who found their calling elsewhere, such as Graham Chapman and Michael Crichton. A nonmedical example is Sir Brian May, who set aside a doctorate in astrophysics to play guitar for Queen, then completed it decades later.

I have no plans to leave medicine for another career. This fall will be 35 years since I started at Creighton Medical School, and I have no regrets. But if others have found something they enjoy more and are successful at, they have nothing to feel guilty about.

Good luck, friend.
 

Dr. Block has a solo neurology practice in Scottsdale, Arizona.

Is Meningitis a Risk Factor for Trigeminal Neuralgia? New Data

Updated Tue, 05/28/2024 - 15:06

Meningitis has been highlighted as a novel risk factor for trigeminal neuralgia in a nationwide, propensity-matched study of hospital admissions.

In multivariate analysis, the odds of meningitis were threefold higher in patients admitted with trigeminal neuralgia than in matched controls without trigeminal neuralgia.

This is the first nationwide population-based study of this rare, chronic pain disorder to estimate the prevalence of trigeminal neuralgia admissions in the United States and to identify risk factors contributing to its development.

“Our results affirm known associations between trigeminal neuralgia and comorbidities like multiple sclerosis, and they also identify meningitis as a novel risk factor for trigeminal neuralgia,” said investigator Megan Tang, BS, a medical student at the Icahn School of Medicine at Mount Sinai, New York City.

The findings were presented at the American Association of Neurological Surgeons (AANS) 2024 annual meeting.
 

Strong Clinical Risk Factors

Trigeminal neuralgia is a rare pain disorder involving neurovascular compression of the trigeminal nerve. Its etiology and risk factors are poorly understood. Current literature is based on limited datasets and reports inconsistent risk factors across studies.

To better understand the disorder, researchers used International Classification of Diseases, Ninth Revision (ICD-9) codes to identify trigeminal neuralgia admissions in the National Inpatient Sample from 2016 to 2019 and then propensity matched them 1:1 to non-trigeminal neuralgia admissions on the basis of demographics, socioeconomic status, and Charlson Comorbidity Index scores.

Univariate analysis identified 136,345 trigeminal neuralgia admissions, corresponding to an overall prevalence of 0.096%.

Trigeminal neuralgia admissions had lower morbidity than non-trigeminal neuralgia admissions and a higher prevalence of non-White patients, private insurance, and prolonged length of stay, Ms. Tang said.

Patients admitted for trigeminal neuralgia also had a higher prevalence of several chronic conditions, including hypertension, hyperlipidemia, and osteoarthritis; inflammatory conditions like lupus, meningitis, rheumatoid arthritis, and inflammatory bowel disease; and neurologic conditions including multiple sclerosis, epilepsy, stroke, and neurovascular compression disorders.

In multivariate analysis, investigators identified meningitis as a previously unknown risk factor for trigeminal neuralgia (odds ratio [OR], 3.1; P < .001).

Other strong risk factors were neurovascular compression disorders (OR, 39.82; P < .001) and multiple sclerosis (OR, 12.41; P < .001). Non-White race (Black: OR, 1.09; Hispanic: OR, 1.23; other: OR, 1.24) and use of Medicaid (OR, 1.07) or other insurance (OR, 1.17) were demographic risk factors for trigeminal neuralgia.

“This finding points us toward future work exploring the potential mechanisms of predictors, most notably inflammatory conditions in trigeminal neuralgia development,” Ms. Tang concluded.

She declined to comment further on the findings, noting the investigators are still finalizing the results and interpretation.
 

Ask About Meningitis, Fever

Commenting on the findings, Michael D. Staudt, MD, MSc, of University Hospitals Cleveland Medical Center, said that many patients who present with classical trigeminal neuralgia will have a blood vessel on MRI that is pressing on the trigeminal nerve.

“Obviously, the nerve is bathed in cerebrospinal fluid. So, if there’s an inflammatory marker, inflammation, or infection that could be injuring the nerve in a way that we don’t yet understand, that could be something that could cause trigeminal neuralgia without having to see a blood vessel,” said Dr. Staudt, who was not involved in the study. “It makes sense, theoretically. Something that’s inflammatory, something that’s irritating, that’s novel.”

Currently, predictive markers include clinical history, response to classical medications such as carbamazepine, and MRI findings, Dr. Staudt noted.

“Someone shows up with symptoms and MRI, and it’s basically do they have a blood vessel or not,” he said. “Treatments are generally within the same categories, but we don’t think it’s the same sort of success rate as seeing a blood vessel.”

Further research is needed, but, in the meantime, Dr. Staudt said, “We can ask patients who show up with facial pain if they’ve ever had meningitis or some sort of fever that preceded their onset of pain.”

The study had no specific funding. Ms. Tang and coauthor Jack Y. Zhang, MS, reported no relevant financial disclosures. Dr. Staudt reported serving as a consultant for Abbott and as a scientific adviser and consultant for Boston Scientific.

A version of this article appeared on Medscape.com.

FROM AANS 2024


Does Racism in Black Americans Boost Alzheimer’s Risk?

Updated Wed, 05/15/2024 - 11:49

Racial discrimination in Black Americans is associated with an increased risk of developing Alzheimer’s disease (AD) in later life, new findings showed.

Researchers found that Black Americans who experience racism in their 40s and 50s are more likely to have increased serum levels of AD biomarkers p-tau181 and neurofilament light (NfL) more than a decade later.

“We know that Black Americans are at an elevated risk of Alzheimer’s disease and other dementias compared to non-Hispanic White Americans, but we don’t fully understand all the factors that contribute to this disproportionate risk,” Michelle Mielke, PhD, co-author and professor of epidemiology and prevention at Wake Forest University School of Medicine, Winston-Salem, North Carolina, said in a press release.

Recent data show AD is nearly twice as prevalent in Black Americans as in White Americans, at 18.6% and 10%, respectively. Dr. Mielke said this level of disparity cannot be attributed solely to genetic differences, and evidence suggests that racism and its related stress may play a role.

The findings were published online in Alzheimer’s & Dementia.
 

AD Biomarker Testing

To further explore a possible link between exposure to racism and AD risk, investigators analyzed data from the Family and Community Health Study, a multisite, longitudinal investigation that included more than 800 families in the United States.

Blood samples and information on racial discrimination were collected from 255 middle-aged Black Americans between 2002 and 2005.

Blood samples were tested for serum phosphorylated tau181 (p-tau181), a marker of AD pathology; NfL, a nonspecific marker of neurodegeneration; and glial fibrillary acidic protein (GFAP), a marker of brain inflammation.

Participants answered questions about racial discrimination, including whether they had been subjected to disrespectful treatment such as racial slurs, harassment from law enforcement, or exclusion from social activities because of their race.

The sample included 212 females and 43 males with a mean age of 46. Most participants (70%) lived in urban areas.
 

Stress-Related?

Investigators found no correlation between racial discrimination and increased levels of AD blood biomarkers in 2008, when participants were a mean age of 46 years. However, 11 years later, when participants were roughly 57 years old, experiencing racism in middle age was significantly correlated with higher levels of both p-tau181 (r = 0.158; P ≤ .012) and NfL (r = 0.143; P ≤ .023). There was no significant association between reported discrimination and GFAP.

“These findings support the hypothesis that unique life stressors encountered by Black Americans in midlife become biologically embedded and contribute to AD pathology and neurodegeneration later in life,” the authors wrote.

Based on previous research, investigators speculated that the stress related to discrimination may be associated with reductions in hippocampal and prefrontal cortex volumes and with neurodegeneration in general.

Dr. Mielke also said it’s clear that future studies should focus on racism experienced by Black Americans to further understand their risk for dementia.

“This research can help inform policies and interventions to reduce racial disparities and reduce dementia risk,” she said.

Study limitations include the absence of amyloid biomarkers; investigators noted that participants had nondetectable amyloid levels, likely because serum rather than cerebrospinal fluid was used.

The study was funded by the National Institute on Aging and the National Heart, Lung, and Blood Institute. Dr. Mielke reported serving on scientific advisory boards and/or consulting for Acadia, Biogen, Eisai, LabCorp, Lilly, Merck, PeerView Institute, Roche, Siemens Healthineers, and Sunbird Bio.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

Racial discrimination in Black Americans is associated with an increased risk of developing Alzheimer’s disease (AD) in later life, new findings showed.

Researchers found that Black Americans who experience racism in their 40s and 50s are more likely to have increased serum levels of AD biomarkers p-tau181 and neurofilament light (NfL) more than a decade later.

“We know that Black Americans are at an elevated risk of Alzheimer’s disease and other dementias compared to non-Hispanic White Americans, but we don’t fully understand all the factors that contribute to this disproportionate risk,” Michelle Mielke, PhD, co-author and professor of epidemiology and prevention at Wake Forest University School of Medicine, Winston-Salem, North Carolina, said in a press release.

Recent data show AD is twice as prevalent in Black Americans as in Whites, at 18.6% and 10%, respectively. Dr. Mielke said this level of disparity cannot be attributed solely to genetic differences, and evidence suggests that racism and its related stress may play a role.

The findings were published online in Alzheimer’s and Dementia.
 

AD Biomarker Testing

To further explore a possible link between exposure to racism and AD risk, investigators analyzed data from the Family and Community Health Study, a multisite, longitudinal investigation that included more than 800 families in the United States.

Blood samples and information on racial discrimination were collected from 255 middle-aged Black Americans between 2002 and 2005.


Racial discrimination in Black Americans is associated with an increased risk of developing Alzheimer’s disease (AD) in later life, new findings showed.

Researchers found that Black Americans who experience racism in their 40s and 50s are more likely to have increased serum levels of the AD biomarkers p-Tau181 and neurofilament light (NfL) more than a decade later.

“We know that Black Americans are at an elevated risk of Alzheimer’s disease and other dementias compared to non-Hispanic White Americans, but we don’t fully understand all the factors that contribute to this disproportionate risk,” Michelle Mielke, PhD, co-author and professor of epidemiology and prevention at Wake Forest University School of Medicine, Winston-Salem, North Carolina, said in a press release.

Recent data show AD is twice as prevalent in Black Americans as in Whites, at 18.6% and 10%, respectively. Dr. Mielke said this level of disparity cannot be attributed solely to genetic differences, and evidence suggests that racism and its related stress may play a role.

The findings were published online in Alzheimer’s & Dementia.
 

AD Biomarker Testing

To further explore a possible link between exposure to racism and AD risk, investigators analyzed data from the Family and Community Health Study, a multisite, longitudinal investigation that included more than 800 families in the United States.

Blood samples and information on racial discrimination were collected from 255 middle-aged Black Americans between 2002 and 2005.

Blood samples were tested for serum phosphorylated tau181 (p-Tau181), a marker of AD pathology; NfL, a nonspecific marker of neurodegeneration; and glial fibrillary acidic protein (GFAP), a marker of brain inflammation.

Participants answered questions about racial discrimination, including whether they had been subjected to disrespectful treatment, such as racial slurs or harassment from law enforcement, or had ever been excluded from social activities because of their race.

The sample included 212 women and 43 men with a mean age of 46 years. Most participants (70%) lived in urban areas.
 

Stress-Related?

Investigators found no correlation between racial discrimination and increased levels of AD blood biomarkers in 2008 when participants were a mean age of 46 years. However, 11 years later, when participants were roughly 57 years old, investigators found experiencing racism in middle age was significantly correlated with higher levels of both p-Tau181 (r = 0.158; P ≤ .012) and NfL (r = 0.143; P ≤ .023). There was no significant association between reported discrimination and GFAP.

“These findings support the hypothesis that unique life stressors encountered by Black Americans in midlife become biologically embedded and contribute to AD pathology and neurodegeneration later in life,” the authors wrote.

Based on previous research, investigators speculated that the stress related to discrimination may be associated with reductions in hippocampal and prefrontal cortex volumes and with neurodegeneration in general.

Dr. Mielke also said it’s clear that future studies should focus on racism experienced by Black Americans to further understand their risk for dementia.

“This research can help inform policies and interventions to reduce racial disparities and reduce dementia risk,” she said.

Study limitations include the absence of amyloid biomarkers. Investigators noted that participants had nondetectable levels of amyloid, likely due to the use of serum rather than cerebrospinal fluid.

The study was funded by the National Institute on Aging and the National Heart, Lung, and Blood Institute. Mielke reported serving on scientific advisory boards and/or having consulted for Acadia, Biogen, Eisai, LabCorp, Lilly, Merck, PeerView Institute, Roche, Siemens Healthineers, and Sunbird Bio.

A version of this article appeared on Medscape.com.


Lecanemab’s Promise and Peril: Alzheimer’s Treatment Dilemma

Article Type
Changed
Wed, 05/15/2024 - 11:45

Clinicians interested in treating patients with symptoms of mild cognitive impairment or mild dementia should carefully analyze the potential benefits and harms of monoclonal amyloid beta therapy, including likelihood of side effects and overall burden on the patient, according to researchers at the annual meeting of the American Geriatrics Society (AGS). 

Lecanemab (Leqembi) may help some patients by lowering the level of beta-amyloid protein in the brain. Results from a phase 3 trial presented at the conference showed participants with Alzheimer’s disease had a 27% slower progression of the disease compared with placebo.

But clinicians must weigh that advantage against risks and contraindications, according to Esther Oh, MD, PhD, an associate professor in the Division of Geriatric Medicine and Gerontology and co-director of the Johns Hopkins Memory and Alzheimer’s Treatment Center, Johns Hopkins University, Baltimore, Maryland, who spoke during a plenary session. Lecanemab gained accelerated approval by the US Food and Drug Administration in January 2023 and full approval in July 2023.

The results from Clarity AD, an 18-month, multicenter, double-blind trial involving 1795 participants aged 50-90 years, showed that the difference between treatment and placebo did not meet the threshold for a minimum clinically important difference for mild cognitive impairment or mild Alzheimer’s disease.

Even more concerning to Dr. Oh was the rate of amyloid-related imaging abnormalities, which can manifest as brain edema (12.6%) or hemorrhage (17.3%). Almost 85% of cases were asymptomatic.

The risk for abnormalities indicates that thrombolytics are contraindicated for patients taking the drug, according to Dr. Oh. 

“Appropriate use recommendations exclude vitamin K antagonists such as warfarin, direct oral anticoagulants and heparin, although aspirin and other antiplatelet agents are allowed,” Dr. Oh said during the presentation.

Blood biomarkers, PET imaging, and levels of amyloid-beta proteins in cerebrospinal fluid are used to determine eligibility for lecanemab. However, tau biomarkers may indicate AD pathology decades before symptoms appear, and some evidence indicates the drug may be more effective in individuals with the low tau levels evident in earlier stages of disease. Tau can also be measured in cerebrospinal fluid; however, “we do not factor in tau protein as a biomarker for treatment eligibility, but this may become an important biomarker in the future,” Dr. Oh said.

Lecanemab is cost-prohibitive for many patients, with an annual price tag of $26,000. Treatment also requires regular intravenous infusions, PET imaging, lab work, multiple MRIs, and potentially an APOE4 serum test.

Medicare covers the majority of services, but patients are responsible for deductibles and copays, an estimated $7000 annually, according to Shari Ling, MD, deputy chief medical officer with the US Centers for Medicare & Medicaid Services, who also spoke during the session. This estimate does not account for supplemental coverage or other insurance such as Medicaid.

The Medicare population is growing more complex over time, Dr. Ling said. In 2021, 54% of beneficiaries had five or more comorbidities, which can affect eligibility for lecanemab. 

“Across the healthcare system, we are learning what is necessary for coordination of delivery, for evaluation of people who receive these treatments, and for the care that is not anticipated,” Dr. Ling noted.

Neither speaker reported any financial conflicts of interest.

A version of this article first appeared on Medscape.com.


Article Source

FROM AGS 2024


Nocturnal Hot Flashes and Alzheimer’s Risk

Article Type
Changed
Wed, 05/15/2024 - 11:10

In a recent article in the American Journal of Obstetrics & Gynecology, Rebecca C. Thurston, PhD, and Pauline Maki, PhD, leading scientists in the area of menopause’s impact on brain function, presented data from their assessment of 248 late perimenopausal and postmenopausal women who reported hot flashes, also known as vasomotor symptoms (VMS).

Hot flashes are known to be associated with changes in brain white matter, carotid atherosclerosis, brain function, and memory. Dr. Thurston and colleagues objectively measured VMS over 24 hours, using skin conductance monitoring. Plasma concentrations of Alzheimer’s disease biomarkers, including the amyloid beta 42–to–amyloid beta 40 ratio, were assessed. The mean age of study participants was 59 years, and they experienced a mean of five objective VMS daily.

A key finding was that VMS, particularly those occurring during sleep, were associated with a significantly lower amyloid beta 42–to–beta 40 ratio. This finding suggests that nighttime VMS may be a marker of risk for Alzheimer’s disease.

Previous research has found that menopausal hormone therapy is associated with favorable changes in Alzheimer’s disease biomarkers. Likewise, large observational studies have shown a lower incidence of Alzheimer’s disease among women who initiate hormone therapy in their late perimenopausal or early postmenopausal years and continue such therapy long term.

The findings of this important study by Thurston and colleagues provide further evidence to support the tantalizing possibility that agents that reduce nighttime hot flashes (including hormone therapy) may lower the subsequent incidence of Alzheimer’s disease in high-risk women.
 

Dr. Kaunitz is a tenured professor and associate chair in the department of obstetrics and gynecology at the University of Florida College of Medicine–Jacksonville, and medical director and director of menopause and gynecologic ultrasound services at the University of Florida Southside Women’s Health, Jacksonville. He disclosed ties to Sumitomo Pharma America, Mithra, Viatris, Bayer, Merck, Mylan (Viatris), and UpToDate.

A version of this article appeared on Medscape.com.


Lower Urinary Tract Symptoms Associated With Poorer Cognition in Older Adults

Article Type
Changed
Tue, 05/14/2024 - 16:25

Lower urinary tract symptoms were significantly associated with lower cognitive test scores in older adults, based on data from approximately 10,000 individuals.

Lower urinary tract symptoms are “very common in aging men and women”; however, older adults often underreport symptoms and avoid seeking treatment, Belinda Williams, MD, of the University of Alabama at Birmingham, said in a presentation at the annual meeting of the American Geriatrics Society.

“Evidence also shows us that the incidence of lower urinary tract symptoms (LUTS) is higher in patients with dementia,” she said. However, the association between cognitive impairment and LUTS has not been well studied, she said.

To address this knowledge gap, Dr. Williams and colleagues reviewed data from older adults with and without LUTS who were enrolled in the REasons for Geographic and Racial Differences in Stroke (REGARDS) study, a cohort study including 30,239 Black or White adults aged 45 years and older who completed telephone or in-home assessments in 2003-2007 and in 2013-2017.

The study population included 6062 women and 4438 men who responded to questionnaires about LUTS and completed several cognitive tests via telephone in 2019-2010. The tests evaluated verbal fluency, executive function, and memory, and included the Six-Item Screener, Animal Naming, Letter F naming, and word list learning; lower scores indicated poorer cognitive performance.

Participants who met the criteria for LUTS were categorized as having mild, moderate, or severe symptoms.

The researchers controlled for age, race, education, income, and urban/rural setting in a multivariate analysis. The mean ages of the women and men were 69 years and 63 years, respectively; 41% and 32% were Black, 59% and 68% were White.

Overall, 70% of women and 62% of men reported LUTS; 6.2% and 8.2%, respectively, met criteria for cognitive impairment. The association between cognitive impairment and LUTS was statistically significant for all specific tests (P < .01), but not for the global cognitive domain tests.

Black men were more likely to report LUTS than White men, but LUTS reports were similar between Black and White women.

Moderate LUTS was the most common degree of severity for men and women (54% and 64%, respectively).

The most common symptom overall was pre-toilet leakage (urge urinary incontinence), reported by 94% of women and 91% of men. The next most common symptoms for men and women were nocturia and urgency.

“We found that, across the board, in all the cognitive tests, LUTS were associated with lower cognitive test scores,” Dr. Williams said in her presentation. Little difference was seen on the Six-Item Screener, she noted, but when the researchers reanalyzed the data using scores lower than 4 to indicate cognitive impairment, they found a significant association with LUTS.

The finding that LUTS were consistently associated with lower cognitive test scores for verbal fluency, executive function, and memory is applicable in clinical practice, Dr. Williams said in her presentation.

“Recognizing the subtle changes in cognition among older adults with LUTS may impact treatment decisions,” she said. “For example, we can encourage and advise our patients to be physically and cognitively active and to avoid anticholinergic medications.”

Next steps for research include analyzing longitudinal changes in cognition among participants with and without LUTS, said Dr. Williams.

During a question-and-answer session, Dr. Williams agreed with a comment that incorporating cognitive screening strategies into LUTS clinical pathways might be helpful, such as conducting a baseline Montreal Cognitive Assessment (MoCA) in patients with LUTS. “Periodic repeat MoCAs thereafter can help assess decline in cognition,” she said.

The study was supported by the National Institute of Neurological Disorders and Stroke and the National Institute on Aging. The researchers had no financial conflicts to disclose.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

Lower urinary tract symptoms were significantly associated with lower scores on measures of cognitive impairment in older adults, based on data from approximately 10,000 individuals.

“We know that lower urinary tract symptoms are very common in aging men and women;” however, older adults often underreport symptoms and avoid seeking treatment, Belinda Williams, MD, of the University of Alabama, Birmingham, said in a presentation at the annual meeting of the American Geriatrics Society.

“Evidence also shows us that the incidence of lower urinary tract symptoms (LUTS) is higher in patients with dementia,” she said. However, the association between cognitive impairment and LUTS has not been well studied, she said.

To address this knowledge gap, Dr. Williams and colleagues reviewed data from older adults with and without LUTS who were enrolled in the REasons for Geographic and Racial Differences in Stroke (REGARDS) study, a cohort study including 30,239 Black or White adults aged 45 years and older who completed telephone or in-home assessments in 2003-2007 and in 2013-2017.

The study population included 6062 women and 4438 men who responded to questionnaires about LUTS and completed several cognitive tests via telephone in 2019-2010. The tests evaluated verbal fluency, executive function, and memory, and included the Six-Item Screener, Animal Naming, Letter F naming, and word list learning; lower scores indicated poorer cognitive performance.

Participants who met the criteria for LUTS were categorized as having mild, moderate, or severe symptoms.

The researchers controlled for age, race, education, income, and urban/rural setting in a multivariate analysis. The mean ages of the women and men were 69 years and 63 years, respectively; 41% and 32% were Black, 59% and 68% were White.

Overall, 70% of women and 62% of men reported LUTS; 6.2% and 8.2%, respectively, met criteria for cognitive impairment. The association between cognitive impairment and LUTS was statistically significant for all specific tests (P < .01), but not for the global cognitive domain tests.

Black men were more likely to report LUTS than White men, but LUTS reports were similar between Black and White women.

Moderate LUTS was the most common degree of severity for men and women (54% and 64%, respectively).

The most common symptom overall was pre-toilet leakage (urge urinary incontinence), reported by 94% of women and 91% of men. The next most common symptoms for men and women were nocturia and urgency.

“We found that, across the board, in all the cognitive tests, LUTS were associated with lower cognitive test scores,” Dr. Williams said in her presentation. Little differences were seen on the Six-Item Screener, she noted, but when they further analyzed the data using scores lower than 4 to indicate cognitive impairment, they found significant association with LUTS, she said.

The results showing that the presence of LUTS was consistently associated with lower cognitive test scores of verbal fluency, executive function, and memory, are applicable in clinical practice, Dr. Williams said in her presentation.

“Recognizing the subtle changes in cognition among older adults with LUTS may impact treatment decisions,” she said. “For example, we can encourage and advise our patients to be physically and cognitively active and to avoid anticholinergic medications.”

Next steps for research include analyzing longitudinal changes in cognition among participants with and without LUTS, said Dr. Williams.

During a question-and-answer session, Dr. Williams agreed with a comment that incorporating cognitive screening strategies in to LUTS clinical pathways might be helpful, such as conducting a baseline Montreal Cognitive Assessment Test (MoCA) in patients with LUTS. “Periodic repeat MoCAs thereafter can help assess decline in cognition,” she said.

The study was supported by the National Institutes of Neurological Disorders and Stroke and the National Institute on Aging. The researchers had no financial conflicts to disclose.

Lower urinary tract symptoms were significantly associated with lower scores on measures of cognitive impairment in older adults, based on data from approximately 10,000 individuals.

“We know that lower urinary tract symptoms are very common in aging men and women;” however, older adults often underreport symptoms and avoid seeking treatment, Belinda Williams, MD, of the University of Alabama, Birmingham, said in a presentation at the annual meeting of the American Geriatrics Society.

“Evidence also shows us that the incidence of lower urinary tract symptoms (LUTS) is higher in patients with dementia,” she said. However, the association between cognitive impairment and LUTS has not been well studied, she said.

To address this knowledge gap, Dr. Williams and colleagues reviewed data from older adults with and without LUTS who were enrolled in the REasons for Geographic and Racial Differences in Stroke (REGARDS) study, a cohort study including 30,239 Black or White adults aged 45 years and older who completed telephone or in-home assessments in 2003-2007 and in 2013-2017.

The study population included 6062 women and 4438 men who responded to questionnaires about LUTS and completed several cognitive tests via telephone in 2019-2020. The tests evaluated verbal fluency, executive function, and memory, and included the Six-Item Screener, Animal Naming, Letter F naming, and word list learning; lower scores indicated poorer cognitive performance.

Participants who met the criteria for LUTS were categorized as having mild, moderate, or severe symptoms.

The researchers controlled for age, race, education, income, and urban/rural setting in a multivariate analysis. The mean ages of the women and men were 69 years and 63 years, respectively; 41% and 32% were Black, 59% and 68% were White.

Overall, 70% of women and 62% of men reported LUTS; 6.2% and 8.2%, respectively, met criteria for cognitive impairment. The association between cognitive impairment and LUTS was statistically significant for all specific tests (P < .01), but not for the global cognitive domain tests.

Black men were more likely to report LUTS than White men, but LUTS reports were similar between Black and White women.

Moderate LUTS was the most common degree of severity for men and women (54% and 64%, respectively).

The most common symptom overall was pre-toilet leakage (urge urinary incontinence), reported by 94% of women and 91% of men. The next most common symptoms for men and women were nocturia and urgency.

“We found that, across the board, in all the cognitive tests, LUTS were associated with lower cognitive test scores,” Dr. Williams said in her presentation. Little difference was seen on the Six-Item Screener, she noted, but when the researchers further analyzed the data using scores lower than 4 to indicate cognitive impairment, they found a significant association with LUTS.

The results, showing that the presence of LUTS was consistently associated with lower cognitive test scores for verbal fluency, executive function, and memory, are applicable in clinical practice, Dr. Williams said in her presentation.

“Recognizing the subtle changes in cognition among older adults with LUTS may impact treatment decisions,” she said. “For example, we can encourage and advise our patients to be physically and cognitively active and to avoid anticholinergic medications.”

Next steps for research include analyzing longitudinal changes in cognition among participants with and without LUTS, said Dr. Williams.

During a question-and-answer session, Dr. Williams agreed with a comment that incorporating cognitive screening strategies into LUTS clinical pathways might be helpful, such as conducting a baseline Montreal Cognitive Assessment (MoCA) in patients with LUTS. “Periodic repeat MoCAs thereafter can help assess decline in cognition,” she said.

The study was supported by the National Institute of Neurological Disorders and Stroke and the National Institute on Aging. The researchers had no financial conflicts to disclose.

Publications
Publications
Topics
Article Type
Sections
Article Source

FROM AGS 2024

Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article

High-Potency Cannabis Tied to Impaired Brain Development, Psychosis, Cannabis-Use Disorder

Article Type
Changed
Tue, 05/14/2024 - 13:08

It’s becoming clear that the adolescent brain is particularly vulnerable to cannabis, especially today’s higher-potency products, which put teens at risk for impaired brain development; mental health issues, including psychosis; and cannabis-use disorder (CUD). 

That was the message delivered by Yasmin Hurd, PhD, director of the Addiction Institute at Mount Sinai in New York, during a press briefing at the American Psychiatric Association (APA) 2024 annual meeting.

“We’re actually in historic times in that we now have highly concentrated, highly potent cannabis products that are administered in various routes,” Dr. Hurd told reporters. 

Tetrahydrocannabinol (THC) concentrations in cannabis products have increased over the years, from around 2%-4% to 15%-24% now, Dr. Hurd noted.

The impact of high-potency cannabis products and increased risk for CUD and mental health problems, particularly in adolescents, “must be taken seriously, especially in light of the current mental health crisis,” Dr. Hurd and colleagues wrote in a commentary on the developmental trajectory of CUD published simultaneously in the American Journal of Psychiatry.
 

Dramatic Increase in Teen Cannabis Use

A recent study from Oregon Health & Science University showed that adolescent cannabis abuse in the United States has increased dramatically, by about 245%, since 2000. 

“Drug abuse is often driven by what is in front of you,” Nora Volkow, MD, director of the National Institute on Drug Abuse, noted in an interview. 

“Right now, cannabis is widely available. So, guess what? Cannabis becomes the drug that people take. Nicotine is much harder to get. It is regulated to a much greater extent than cannabis, so fewer teenagers are consuming nicotine than are consuming cannabis,” Dr. Volkow said. 

Cannabis exposure during neurodevelopment has the potential to alter the endocannabinoid system, which, in turn, can affect the development of neural pathways that mediate reward; emotional regulation; and multiple cognitive domains including executive functioning and decision-making, learning, abstraction, and attention — all processes central to substance use disorder and other psychiatric disorders, Dr. Hurd said at the briefing.

Dr. Volkow said that cannabis use in adolescence and young adulthood is “very concerning because that’s also the age of risk for psychosis, particularly schizophrenia, with one study showing that use of cannabis in high doses can trigger psychotic episodes, particularly among young males.”

Dr. Hurd noted that not all young people who use cannabis develop CUD, “but a significant number do,” and large-scale studies have consistently reported two main factors associated with CUD risk.

The first is age: risk rises with earlier onset and with more frequent use at a younger age. Those who start using cannabis before age 16 years are at the highest risk for CUD. The risk for CUD also increases significantly among youth who use cannabis at least weekly, with the highest prevalence among youth who use cannabis daily. One large study linked increased frequency of use with up to a 17-fold increased risk for CUD.

The second factor consistently associated with the risk for CUD is biologic sex, with CUD rates typically higher in male individuals.
 

Treatment Challenges

For young people who develop CUD, access to and uptake of treatment can be challenging.

“Given that the increased potency of cannabis and cannabinoid products is expected to increase CUD risk, it is disturbing that less than 10% of youth who meet the criteria for a substance use disorder, including CUD, receive treatment,” Dr. Hurd and colleagues point out in their commentary. 

Another challenge is that treatment strategies for CUD are currently limited and consist mainly of motivational enhancement and cognitive-behavioral therapies. 

“Clearly new treatment strategies are needed to address the mounting challenge of CUD risk in teens and young adults,” Dr. Hurd and colleagues wrote. 

Summing up, Dr. Hurd told reporters, “We now know that most psychiatric disorders have a developmental origin, and the adolescent time period is a critical window for cannabis use disorder risk.”

Yet, on a positive note, the “plasticity of the developing brain that makes it vulnerable to cannabis use disorder and psychiatric comorbidities also provides an opportunity for prevention and early intervention to change that trajectory,” Dr. Hurd said. 

The changing legal landscape of cannabis — the US Drug Enforcement Administration is moving forward with plans to move marijuana from a Schedule I to a Schedule III controlled substance under the Controlled Substances Act — makes addressing these risks all the more timely.

“As states vie to leverage tax dollars from the growing cannabis industry, a significant portion of such funds must be used for early intervention/prevention strategies to reduce the impact of cannabis on the developing brain,” Dr. Hurd and colleagues wrote. 

This research was supported in part by the National Institute on Drug Abuse and the National Institutes of Health. Dr. Hurd and Dr. Volkow have no relevant disclosures. 

A version of this article appeared on Medscape.com.


FROM APA 2024

Widespread, Long-Held Practice in Dementia Called Into Question

Article Type
Changed
Tue, 05/14/2024 - 12:31

Hospitalized patients with dementia and dysphagia are often prescribed a “dysphagia diet,” made up of texture-modified foods and thickened liquids in an effort to reduce the risk for aspiration or other problems. However, a new study calls this widespread and long-held practice into question.

Investigators found no evidence that the use of thickened liquids reduced mortality or respiratory complications, such as pneumonia, aspiration, or choking, compared with thin-liquid diets in patients with Alzheimer’s disease and related dementias (ADRD) and dysphagia. Patients receiving thick liquids were less likely to be intubated, but they were actually more likely to have respiratory complications.

“When hospitalized patients with Alzheimer’s disease and related dementias are found to have dysphagia, our go-to solution is to use a thick liquid diet,” senior author Liron Sinvani, MD, with the Feinstein Institutes for Medical Research, Manhasset, New York, said in a news release.

“However, there is no concrete evidence that thick liquids improve health outcomes, and we also know that thick liquids can lead to decreased palatability, poor oral intake, dehydration, malnutrition, and worse quality of life,” added Dr. Sinvani, director of the geriatric hospitalist service at Northwell Health in New York.

The study was published online in JAMA Internal Medicine.
 

Challenging a Go-To Solution

The researchers compared outcomes in a propensity score-matched cohort of patients with ADRD and dysphagia (mean age, 86 years; 54% women) receiving mostly thick liquids versus thin liquids during their hospitalization. There were 4458 patients in each group.

They found no significant difference in hospital mortality between the thick liquids and thin liquids groups (hazard ratio [HR], 0.92; P = .46).

Patients receiving thick liquids were less likely to require intubation (odds ratio [OR], 0.66; 95% CI, 0.54-0.80) but were more likely to develop respiratory complications (OR, 1.73; 95% CI, 1.56-1.91).

The two groups did not differ significantly in terms of risk for dehydration, hospital length of stay, or rate of 30-day readmission.

“This cohort study emphasizes the need for prospective studies that evaluate whether thick liquids are associated with improved clinical outcomes in hospitalized patients with ADRD and dysphagia,” the authors wrote.

Because few patients received a Modified Barium Swallow Study at baseline, researchers were unable to confirm the presence of dysphagia or account for dysphagia severity and impairment. It’s possible that patients in the thick liquid group had more severe dysphagia than those in the thin liquid group.

Another limitation is that the type of dementia and severity were not characterized. Also, the study could not account for factors like oral hygiene, immune status, and diet adherence that could impact risks like aspiration pneumonia.
 

Theoretical Benefit, No Evidence

In an invited commentary on the study, Eric Widera, MD, of the University of California, San Francisco, noted that medicine is “littered with interventions that have become the standard of practice based on theoretical benefits without clinical evidence.”

One example is percutaneous endoscopic gastrostomy tubes for individuals with dysphagia and dementia.

“For decades, these tubes were regularly used in individuals with dementia on the assumption that bypassing the oropharyngeal route would decrease rates of aspiration and, therefore, decrease adverse outcomes like pressure ulcers, malnutrition, pneumonia, and death. However, similar to what we see with thickened liquids, evidence slowly built that this standard of practice was not evidence-based practice,” Dr. Widera wrote.

When thinking about thick liquid diets, Dr. Widera encouraged clinicians to “acknowledge the limitations of the evidence both for and against thickened-liquid diets.”

He also encouraged clinicians to “put yourself in the shoes of the patients who will be asked to adhere to this modified diet. For 12 hours, drink your tea, coffee, wine, and water as thickened liquids,” Dr. Widera suggested. “The goal is not to convince yourself never to prescribe thickened liquids, but rather to be mindful of how a thickened liquid diet affects patients’ liquid and food intake, how it changes the mouthfeel and taste of different drinks, and how it affects patients’ quality of life.”

Clinicians also should “proactively engage speech-language pathologists, but do not ask them if it is safe for a patient with dementia to eat or drink normally. Instead, ask what we can do to meet the patient’s goals and maintain quality of life given the current evidence base,” Dr. Widera wrote.

“For some, when the patient’s goals are focused on comfort, this may lead to a recommendation for thickened liquids if their use may resolve significant coughing distress after drinking thin liquids. Alternatively, even when the patient’s goals are focused on prolonging life, the risks of thickened liquids, including dehydration and decreased food and fluid intake, as well as the thin evidence for mortality improvement, will argue against their use,” Dr. Widera added.

Funding for the study was provided by grants from the National Institute on Aging and by the William S. Middleton Veteran Affairs Hospital, Madison, Wisconsin. Dr. Sinvani and Dr. Widera declared no relevant conflicts of interest.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

Hospitalized patients with dementia and dysphagia are often prescribed a “dysphagia diet,” made up of texture-modified foods and thickened liquids in an effort to reduce the risk for aspiration or other problems. However, a new study calls this widespread and long-held practice into question.

Investigators found no evidence that the use of thickened liquids reduced mortality or respiratory complications, such as pneumonia, aspiration, or choking, compared with thin-liquid diets in patients with Alzheimer’s disease and related dementias (ADRD) and dysphagia. Patients receiving thick liquids were less likely to be intubated, but they were actually more likely to have respiratory complications.

“When hospitalized patients with Alzheimer’s disease and related dementias are found to have dysphagia, our go-to solution is to use a thick liquid diet,” senior author Liron Sinvani, MD, with the Feinstein Institutes for Medical Research, Manhasset, New York, said in a news release.

“However, there is no concrete evidence that thick liquids improve health outcomes, and we also know that thick liquids can lead to decreased palatability, poor oral intake, dehydration, malnutrition, and worse quality of life,” added Dr. Sinvani, director of the geriatric hospitalist service at Northwell Health in New York.

The study was published online in JAMA Internal Medicine.
 

Challenging a Go-To Solution

The researchers compared outcomes in a propensity score-matched cohort of patients with ADRD and dysphagia (mean age, 86 years; 54% women) receiving mostly thick liquids versus thin liquids during their hospitalization. There were 4458 patients in each group.

They found no significant difference in hospital mortality between the thick liquids and thin liquids groups (hazard ratio [HR], 0.92; = .46).

Patients receiving thick liquids were less likely to require intubation (odds ratio [OR], 0.66; 95% CI, 0.54-0.80) but were more likely to develop respiratory complications (OR, 1.73; 95% CI, 1.56-1.91).

The two groups did not differ significantly in terms of risk for dehydration, hospital length of stay, or rate of 30-day readmission.

“This cohort study emphasizes the need for prospective studies that evaluate whether thick liquids are associated with improved clinical outcomes in hospitalized patients with ADRD and dysphagia,” the authors wrote.

Because few patients received a Modified Barium Swallow Study at baseline, researchers were unable to confirm the presence of dysphagia or account for dysphagia severity and impairment. It’s possible that patients in the thick liquid group had more severe dysphagia than those in the thin liquid group.

Another limitation is that the type of dementia and severity were not characterized. Also, the study could not account for factors like oral hygiene, immune status, and diet adherence that could impact risks like aspiration pneumonia.
 

Theoretical Benefit, No Evidence

In an invited commentary on the study, Eric Widera, MD, with University of California San Francisco, noted that medicine is “littered with interventions that have become the standard of practice based on theoretical benefits without clinical evidence”.

One example is percutaneous endoscopic gastrostomy tubes for individuals with dysphagia and dementia.

“For decades, these tubes were regularly used in individuals with dementia on the assumption that bypassing the oropharyngeal route would decrease rates of aspiration and, therefore, decrease adverse outcomes like pressure ulcers, malnutrition, pneumonia, and death. However, similar to what we see with thickened liquids, evidence slowly built that this standard of practice was not evidence-based practice,” Dr. Widera wrote.

When thinking about thick liquid diets, Dr. Widera encouraged clinicians to “acknowledge the limitations of the evidence both for and against thickened-liquid diets.”

He also encouraged clinicians to “put yourself in the shoes of the patients who will be asked to adhere to this modified diet. For 12 hours, drink your tea, coffee, wine, and water as thickened liquids,” Dr. Widera suggested. “The goal is not to convince yourself never to prescribe thickened liquids, but rather to be mindful of how a thickened liquid diet affects patients’ liquid and food intake, how it changes the mouthfeel and taste of different drinks, and how it affects patients’ quality of life.”

Clinicians also should “proactively engage speech-language pathologists, but do not ask them if it is safe for a patient with dementia to eat or drink normally. Instead, ask what we can do to meet the patient’s goals and maintain quality of life given the current evidence base,” Dr. Widera wrote.

“For some, when the patient’s goals are focused on comfort, this may lead to a recommendation for thickened liquids if their use may resolve significant coughing distress after drinking thin liquids. Alternatively, even when the patient’s goals are focused on prolonging life, the risks of thickened liquids, including dehydration and decreased food and fluid intake, as well as the thin evidence for mortality improvement, will argue against their use,” Dr. Widera added.

Funding for the study was provided by grants from the National Institute on Aging and by the William S. Middleton Veteran Affairs Hospital, Madison, Wisconsin. Dr. Sinvani and Dr. Widera declared no relevant conflicts of interest.

A version of this article appeared on Medscape.com .

Hospitalized patients with dementia and dysphagia are often prescribed a “dysphagia diet,” made up of texture-modified foods and thickened liquids in an effort to reduce the risk for aspiration or other problems. However, a new study calls this widespread and long-held practice into question.

Investigators found no evidence that the use of thickened liquids reduced mortality or respiratory complications, such as pneumonia, aspiration, or choking, compared with thin-liquid diets in patients with Alzheimer’s disease and related dementias (ADRD) and dysphagia. Patients receiving thick liquids were less likely to be intubated, but they were actually more likely to have respiratory complications.

“When hospitalized patients with Alzheimer’s disease and related dementias are found to have dysphagia, our go-to solution is to use a thick liquid diet,” senior author Liron Sinvani, MD, with the Feinstein Institutes for Medical Research, Manhasset, New York, said in a news release.

“However, there is no concrete evidence that thick liquids improve health outcomes, and we also know that thick liquids can lead to decreased palatability, poor oral intake, dehydration, malnutrition, and worse quality of life,” added Dr. Sinvani, director of the geriatric hospitalist service at Northwell Health in New York.

The study was published online in JAMA Internal Medicine.
 

Challenging a Go-To Solution

The researchers compared outcomes in a propensity score-matched cohort of patients with ADRD and dysphagia (mean age, 86 years; 54% women) receiving mostly thick liquids versus thin liquids during their hospitalization. There were 4458 patients in each group.

They found no significant difference in hospital mortality between the thick liquids and thin liquids groups (hazard ratio [HR], 0.92; P = .46).

Patients receiving thick liquids were less likely to require intubation (odds ratio [OR], 0.66; 95% CI, 0.54-0.80) but were more likely to develop respiratory complications (OR, 1.73; 95% CI, 1.56-1.91).

The two groups did not differ significantly in terms of risk for dehydration, hospital length of stay, or rate of 30-day readmission.

“This cohort study emphasizes the need for prospective studies that evaluate whether thick liquids are associated with improved clinical outcomes in hospitalized patients with ADRD and dysphagia,” the authors wrote.

Because few patients received a Modified Barium Swallow Study at baseline, researchers were unable to confirm the presence of dysphagia or account for dysphagia severity and impairment. It’s possible that patients in the thick liquid group had more severe dysphagia than those in the thin liquid group.

Another limitation is that the type of dementia and severity were not characterized. Also, the study could not account for factors like oral hygiene, immune status, and diet adherence that could impact risks like aspiration pneumonia.
 

Theoretical Benefit, No Evidence

In an invited commentary on the study, Eric Widera, MD, with University of California San Francisco, noted that medicine is “littered with interventions that have become the standard of practice based on theoretical benefits without clinical evidence.”

One example is percutaneous endoscopic gastrostomy tubes for individuals with dysphagia and dementia.

“For decades, these tubes were regularly used in individuals with dementia on the assumption that bypassing the oropharyngeal route would decrease rates of aspiration and, therefore, decrease adverse outcomes like pressure ulcers, malnutrition, pneumonia, and death. However, similar to what we see with thickened liquids, evidence slowly built that this standard of practice was not evidence-based practice,” Dr. Widera wrote.

When thinking about thick liquid diets, Dr. Widera encouraged clinicians to “acknowledge the limitations of the evidence both for and against thickened-liquid diets.”

He also encouraged clinicians to “put yourself in the shoes of the patients who will be asked to adhere to this modified diet. For 12 hours, drink your tea, coffee, wine, and water as thickened liquids,” Dr. Widera suggested. “The goal is not to convince yourself never to prescribe thickened liquids, but rather to be mindful of how a thickened liquid diet affects patients’ liquid and food intake, how it changes the mouthfeel and taste of different drinks, and how it affects patients’ quality of life.”

Clinicians also should “proactively engage speech-language pathologists, but do not ask them if it is safe for a patient with dementia to eat or drink normally. Instead, ask what we can do to meet the patient’s goals and maintain quality of life given the current evidence base,” Dr. Widera wrote.

“For some, when the patient’s goals are focused on comfort, this may lead to a recommendation for thickened liquids if their use may resolve significant coughing distress after drinking thin liquids. Alternatively, even when the patient’s goals are focused on prolonging life, the risks of thickened liquids, including dehydration and decreased food and fluid intake, as well as the thin evidence for mortality improvement, will argue against their use,” Dr. Widera added.

Funding for the study was provided by grants from the National Institute on Aging and by the William S. Middleton Veteran Affairs Hospital, Madison, Wisconsin. Dr. Sinvani and Dr. Widera declared no relevant conflicts of interest.

A version of this article appeared on Medscape.com .

Article Source

FROM JAMA INTERNAL MEDICINE
It Would Be Nice if Olive Oil Really Did Prevent Dementia

Article Type
Changed
Tue, 05/14/2024 - 10:03

This transcript has been edited for clarity.

As you all know by now, I’m always looking out for lifestyle changes that are both pleasurable and healthy. They are hard to find, especially when it comes to diet. My kids complain about this all the time: “When you say ‘healthy food,’ you just mean yucky food.” And yes, French fries are amazing, and no, we can’t have them three times a day.

So, when I saw an article claiming that olive oil reduces the risk for dementia, I was interested. I love olive oil; I cook with it all the time. But as is always the case in the world of nutritional epidemiology, we need to be careful. There are a lot of reasons to doubt the results of this study — and one reason to believe it’s true.

The study I’m talking about is “Consumption of Olive Oil and Diet Quality and Risk of Dementia-Related Death,” appearing in JAMA Network Open and following a well-trod formula in the nutritional epidemiology space.

Nearly 100,000 participants, all healthcare workers, filled out a food frequency questionnaire every 4 years with 130 questions touching on all aspects of diet: How often do you eat bananas, bacon, olive oil? Participants were followed for more than 20 years, and if they died, the cause of death was flagged as being dementia-related or not. Over that time frame there were around 38,000 deaths, of which 4751 were due to dementia.

The rest is just statistics. The authors show that those who reported consuming more olive oil were less likely to die from dementia — about 50% less likely, if you compare those who reported eating more than 7 grams of olive oil a day with those who reported eating none.
 

Is It What You Eat, or What You Don’t Eat?

And we could stop there if we wanted to; I’m sure big olive oil would be happy with that. Is there such a thing as “big olive oil”? But no, we need to dig deeper here because this study has the same problems as all nutritional epidemiology studies. Number one, no one is sitting around drinking small cups of olive oil. They consume it with other foods. And it was clear from the food frequency questionnaire that people who consumed more olive oil also consumed less red meat, more fruits and vegetables, more whole grains, more butter, and less margarine. And those are just the findings reported in the paper. I suspect that people who eat more olive oil also eat more tomatoes, for example, though data this granular aren’t shown. So, it can be really hard, in studies like this, to know for sure that it’s actually the olive oil that is helpful rather than some other constituent in the diet.

The flip side of that coin presents another issue. The food you eat is also a marker of the food you don’t eat. People who ate olive oil consumed less margarine, for example. At the time of this study, margarine was still adulterated with trans-fats, which a pretty solid evidence base suggests are really bad for your vascular system. So perhaps it’s not that olive oil is particularly good for you but that something else is bad for you. In other words, simply adding olive oil to your diet without changing anything else may not do anything.

The other major problem with studies of this sort is that people don’t consume food at random. The type of person who eats a lot of olive oil is simply different from the type of person who doesn’t. For one thing, olive oil is expensive. A 25-ounce bottle of olive oil is on sale at my local supermarket right now for $11.00. A similar-sized bottle of vegetable oil goes for $4.00.

Isn’t it interesting that food that costs more money tends to be associated with better health outcomes? (I’m looking at you, red wine.) Perhaps it’s not the food; perhaps it’s the money. We aren’t provided data on household income in this study, but we can see that the heavy olive oil users were less likely to be current smokers and they got more physical activity.

Now, the authors are aware of these limitations and do their best to account for them. In multivariable models, they adjust for other stuff in the diet, and even for income (sort of; they use census tract as a proxy for income, which is really a broad brush), and still find a significant though weakened association showing a protective effect of olive oil on dementia-related death. But still — adjustment is never perfect, and the small effect size here could definitely be due to residual confounding.
 

 

 

Evidence More Convincing

Now, I did tell you that there is one reason to believe that this study is true, but it’s not really from this study.

It’s from the PREDIMED randomized trial.

This is nutritional epidemiology I can get behind. Published in 2018, investigators in Spain randomized around 7500 participants to receive a liter of olive oil once a week vs mixed nuts, vs small nonfood gifts, the idea here being that if you have olive oil around, you’ll use it more. And people who were randomly assigned to get the olive oil had a 30% lower rate of cardiovascular events. A secondary analysis of that study found that the rate of development of mild cognitive impairment was 65% lower in those who were randomly assigned to olive oil. That’s an impressive result.

So, there might be something to this olive oil thing, but I’m not quite ready to add it to my “pleasurable things that are still good for you” list just yet. Though it does make me wonder: Can we make French fries in the stuff?
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.



Inappropriate Medication Use Persists in Older Adults With Dementia

Article Type
Changed
Mon, 05/13/2024 - 16:46

Medications that could have a negative effect on cognition are often used by older adults with dementia, according to data from approximately 13 million individuals presented at the annual meeting of the American Geriatrics Society.

Classes of medications including anticholinergics, antipsychotics, benzodiazepines, and non-benzodiazepine sedatives (Z drugs) have been identified as potentially inappropriate medications (PIMs) in patients with dementia, according to The American Geriatrics Society Beers Criteria for Potentially Inappropriate Medication Use in Older Adults.

The medications that could worsen dementia or cognition are known as CogPIMs, said presenting author Caroline M. Mak, a doctor of pharmacy candidate at the University at Buffalo School of Pharmacy and Pharmaceutical Sciences, New York.

Previous research has characterized the prevalence of use of CogPIMs, but data connecting use of CogPIMs and healthcare use are lacking, Ms. Mak said.

Ms. Mak and colleagues conducted a cross-sectional analysis of data from 2011 to 2015 from the Medical Expenditure Panel Survey (MEPS), a national survey with data on medication and healthcare use. The researchers included approximately 13 million survey respondents older than 65 years with dementia.

Exposure to CogPIMs was defined as filling a prescription for one or more of the CogPIMs during the study period. Population estimates of the prevalence of use of the CogPIMs were created using survey-weighted procedures, and prevalence trends were assessed using the Cochran-Armitage test.

Overall, the prevalence was 15.9%, 11.5%, 7.5%, and 3.8% for use of benzodiazepines, anticholinergics, antipsychotics, and Z drugs, respectively, during the study period.

Of these, benzodiazepines showed a significant trend with an increase in prevalence from 8.9% in 2011 to 16.4% in 2015 (P = .02).

Individuals who reported using Z drugs had more than twice the odds of hospitalization (odds ratio, 2.57; P = .02), based on logistic regression. In addition, exposure to antipsychotics was significantly associated with an increased rate of hospitalization, based on a binomial model for incidence rate ratio (IRR, 1.51; P = .02).

The findings were limited by several factors including the cross-sectional design, reliance on self-reports, and the lack of more recent data.

However, the results show that CogPIMs are often used by older adults with dementia, and antipsychotics and Z drugs could be targets for interventions to prevent harm from medication interactions and side effects, the researchers concluded.
 

Findings Highlight Need for Drug Awareness

The current study is important because of the expansion in the aging population and an increase in the number of patients with dementia, Ms. Mak said in an interview. “In both our older population and dementia patients, there are certain medication considerations that we need to take into account, and certain drugs that should be avoided if possible,” she said. Clinicians have been trying to use the Beers criteria to reduce potential medication harm, she noted. “One group of investigators (Hilmer et al.) has proposed a narrower focus on anticholinergic and sedative/hypnotic medication in the Drug Burden Index (DBI); the CogPIMs are a subset of both approaches (Beers and DBI) and represent a collection of medications that pose potential risks to our patients,” said Ms. Mak.

Continued reassessment is needed on appropriateness of anticholinergics, Z drugs, benzodiazepines, and antipsychotics in older patients with dementia, she added.

“Even though the only group to have a significant increase in prevalence [of use] was the benzodiazepine group, we didn’t see a decrease in any of the other groups,” said Ms. Mak. The current research provides a benchmark for CogPIMs use that can be monitored in the future for increases or, ideally, decreases, she said.
 

Part of a Bigger Picture

The current study is part of the work of Team Alice, a national deprescribing group affiliated with the University at Buffalo that was inspired by the tragic death of Alice Brennan, triggered by preventable medication harm, Ms. Mak said in an interview. “Team Alice consists of an array of academic, primary care, health plan, and regional health information partners that have designed patient-driven interventions to reduce medication harm, especially within primary care settings,” she said. “Their mission is to save people like Alice by pursuing multiple strategies to deprescribe unsafe medication, reduce harm, and foster successful aging. By characterizing the use of CogPIMs, we can design better intervention strategies,” she said.

Although Ms. Mak was not surprised by the emergence of benzodiazepines as the most commonly used drug groups, she was surprised by the increase during the study period.

“Unfortunately, our dataset was not rich enough to include reasons for this increase,” she said. In practice, “I have seen patients getting short-term, as needed, prescriptions for a benzodiazepine to address the anxiety and/or insomnia after the loss of a loved one; this may account for a small proportion of benzodiazepine use that appears to be inappropriate because of a lack of associated appropriate diagnosis,” she noted.

Also, the findings of increased hospitalization associated with Z drugs raises concerns, Ms. Mak said. Although the findings are consistent with other research, they illustrate the need for further investigation to identify strategies to prevent this harm, she said. “Not finding associations with hospitalization related to benzodiazepine or anticholinergics was a mild surprise,” Ms. Mak said in an interview. “However, while we know that these drugs can have a negative effect on older people, the effects may not have been severe enough to result in hospitalizations,” she said.

Looking ahead, Ms. Mak said she would like to see the study rerun with a more current data set, especially with regard to benzodiazepines and antipsychotics.
 

Seek Strategies to Reduce Medication Use

The current study was notable for its community-based population and attention to hospitalizations, Shelly Gray, PharmD, a professor of pharmacy at the University of Washington School of Pharmacy, said in an interview.

“Most studies examining potentially inappropriate medications that may impair cognition have been conducted in nursing homes, while this study focuses on community-dwelling older adults, where most people with dementia live,” said Dr. Gray, who served as a moderator for the session in which the study was presented.

In addition, “A unique aspect of this study was to examine how these medications are related to hospitalizations,” she said.

Given recent efforts to reduce use of potentially inappropriate medications in people with dementia, the increase in prevalence of use over the study period was surprising, especially for benzodiazepines, said Dr. Gray.

In clinical practice, “health care providers should continue to look for opportunities to deprescribe medications that may worsen cognition in people with dementia,” she said. However, more research is needed to examine trends in the years beyond 2015 for a more contemporary picture of medication use in this population, she noted.

The study received no outside funding. The researchers and Dr. Gray had no financial conflicts to disclose.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

Medications that could have a negative effect on cognition are often used by older adults with dementia, according to data from approximately 13 million individuals presented at the annual meeting of the American Geriatrics Society.

Classes of medications including anticholinergics, antipsychotics, benzodiazepines, and non-benzodiazepine sedatives (Z drugs) have been identified as potentially inappropriate medications (PIMs) in patients with dementia, according to The American Geriatrics Society Beers Criteria for Potentially Inappropriate Medication Use in Older Adults.

The medications that could worsen dementia or cognition are known as CogPIMs, said presenting author Caroline M. Mak, a doctor of pharmacy candidate at the University at Buffalo School of Pharmacy and Pharmaceutical Sciences, New York.

Previous research has characterized the prevalence of use of CogPIMs, but data connecting use of CogPIMs and healthcare use are lacking, Ms. Mak said.

Ms. Mak and colleagues conducted a cross-sectional analysis of data from 2011 to 2015 from the Medical Expenditure Panel Survey (MEPS), a national survey with data on medication and healthcare use. The researchers included approximately 13 million survey respondents older than 65 years with dementia.

Exposure to CogPIMs was defined as filling a prescription for one or more of the CogPIMs during the study period. Population estimates of the prevalence of use of the CogPIMs were created using survey-weighted procedures, and prevalence trends were assessed using the Cochran-Armitage test.

Overall, the prevalence was 15.9%, 11.5%, 7.5%, and 3.8% for use of benzodiazepines, anticholinergics, antipsychotics, and Z drugs, respectively, during the study period.

Of these, only benzodiazepines showed a significant trend, with prevalence increasing from 8.9% in 2011 to 16.4% in 2015 (P = .02).

The odds of hospitalization were more than doubled among individuals who reported using Z drugs (odds ratio, 2.57; P = .02), based on logistic regression. In addition, exposure to antipsychotics was significantly associated with an increased rate of hospitalization in a binomial model (incidence rate ratio, 1.51; P = .02).

The findings were limited by several factors, including the cross-sectional design, reliance on self-reports, and the lack of more recent data.

However, the results show that CogPIMs are often used by older adults with dementia, and antipsychotics and Z drugs could be targets for interventions to prevent harm from medication interactions and side effects, the researchers concluded.
 

Findings Highlight Need for Drug Awareness

The current study is important because of the expansion in the aging population and an increase in the number of patients with dementia, Ms. Mak said in an interview. “In both our older population and dementia patients, there are certain medication considerations that we need to take into account, and certain drugs that should be avoided if possible,” she said. Clinicians have been trying to use the Beers criteria to reduce potential medication harm, she noted. “One group of investigators (Hilmer et al.) has proposed a narrower focus on anticholinergic and sedative/hypnotic medication in the Drug Burden Index (DBI); the CogPIMs are a subset of both approaches (Beers and DBI) and represent a collection of medications that pose potential risks to our patients,” said Ms. Mak.

Continued reassessment of the appropriateness of anticholinergics, Z drugs, benzodiazepines, and antipsychotics in older patients with dementia is needed, she added.

“Even though the only group to have a significant increase in prevalence [of use] was the benzodiazepine group, we didn’t see a decrease in any of the other groups,” said Ms. Mak. The current research provides a benchmark for CogPIMs use that can be monitored in the future for increases or, ideally, decreases, she said.
 

Part of a Bigger Picture

The current study is part of the work of Team Alice, a national deprescribing group affiliated with the University at Buffalo that was inspired by the tragic death of Alice Brennan, triggered by preventable medication harm, Ms. Mak said in an interview. “Team Alice consists of an array of academic, primary care, health plan, and regional health information partners that have designed patient-driven interventions to reduce medication harm, especially within primary care settings,” she said. “Their mission is to save people like Alice by pursuing multiple strategies to deprescribe unsafe medication, reduce harm, and foster successful aging. By characterizing the use of CogPIMs, we can design better intervention strategies,” she said.

Although Ms. Mak was not surprised that benzodiazepines emerged as the most commonly used drug group, she was surprised by the increase in their use during the study period.

“Unfortunately, our dataset was not rich enough to include reasons for this increase,” she said. In practice, “I have seen patients getting short-term, as needed, prescriptions for a benzodiazepine to address the anxiety and/or insomnia after the loss of a loved one; this may account for a small proportion of benzodiazepine use that appears to be inappropriate because of a lack of associated appropriate diagnosis,” she noted.

Also, the finding of increased hospitalization associated with Z drugs raises concerns, Ms. Mak said. Although the finding is consistent with other research, it illustrates the need for further investigation to identify strategies to prevent this harm, she said. “Not finding associations with hospitalization related to benzodiazepines or anticholinergics was a mild surprise,” Ms. Mak said in an interview. “However, while we know that these drugs can have a negative effect on older people, the effects may not have been severe enough to result in hospitalizations,” she said.

Looking ahead, Ms. Mak said she would like to see the study rerun with a more current data set, especially with regard to benzodiazepines and antipsychotics.
 

Seek Strategies to Reduce Medication Use

The current study was notable for its community-based population and attention to hospitalizations, Shelly Gray, PharmD, a professor of pharmacy at the University of Washington School of Pharmacy, said in an interview.

“Most studies examining potentially inappropriate medications that may impair cognition have been conducted in nursing homes, while this study focuses on community-dwelling older adults, where most people with dementia live,” said Dr. Gray, who served as a moderator for the session in which the study was presented.

In addition, “A unique aspect of this study was to examine how these medications are related to hospitalizations,” she said.

Given recent efforts to reduce use of potentially inappropriate medications in people with dementia, the increase in prevalence of use over the study period was surprising, especially for benzodiazepines, said Dr. Gray.

In clinical practice, “health care providers should continue to look for opportunities to deprescribe medications that may worsen cognition in people with dementia,” she said. However, more research is needed to examine trends in the years beyond 2015 for a more contemporary picture of medication use in this population, she noted.

The study received no outside funding. The researchers and Dr. Gray had no financial conflicts to disclose.


Article Source

FROM AGS 2024
