Analysis shows predictive capabilities of sleep EEG
CHARLOTTE, N.C. – Quantitative analysis of sleep EEG data may help predict a range of unfavorable health outcomes, a researcher reported at the annual meeting of the Associated Professional Sleep Societies. “Sleep EEGs contain decodable information about the risk of unfavorable outcomes,” said Haoqi Sun, PhD, an instructor of neurology at Massachusetts General Hospital, Boston, and lead study author. “The results suggest that it’s feasible to use sleep to identify people with high risk of unfavorable outcomes and it strengthens the concept of sleep as a window into brain and general health.”
The researchers performed a quantitative analysis of sleep data collected on 8,673 adults who underwent diagnostic sleep studies that included polysomnography (PSG). The analysis used ICD codes to consider 11 health outcomes: dementia, mild cognitive impairment (MCI) or dementia, ischemic stroke, intracranial hemorrhage, atrial fibrillation, myocardial infarction, type 2 diabetes, hypertension, bipolar disorder, depression, and mortality.
Then, Dr. Sun explained, they extracted 86 spectral and time-domain features of REM and non-REM sleep from sleep EEG recordings, and analyzed that data by adjusting for eight covariates including age, sex, body mass index, and use of benzodiazepines, antidepressants, sedatives, antiseizure drugs, and stimulants.
Participants were partitioned into three sleep-quality groups: poor, average, and good. The outcome-wise mean prediction difference in 10-year cumulative incidence was 2.3% for the poor sleep group, 0.5% for the average sleep group, and 1.3% for the good sleep group.
The outcomes with the three greatest poor-to-average risk ratios were dementia (6.2; 95% confidence interval, 4.5-9.3), mortality (5.7; 95% CI, 5.0-7.5), and MCI or dementia (4.0; 95% CI, 3.2-4.9).
Ready for the clinic?
In an interview, Dr. Sun said the results demonstrated the potential of using EEG brain wave data to predict health outcomes on an individual basis, although he acknowledged that most of the 86 sleep features the researchers used are not readily available in the clinic.
He noted the spectral features used in the study can be captured through software compatible with PSG. “From there you can identify the various bands, the different frequency ranges, and then you can easily see within this range whether a person has a higher power or lower power,” he said. However, the spindle and slow-oscillation features that researchers used in the study are beyond the reach of most clinics.
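Dr. Sun’s point about band power can be illustrated with a short sketch. This is a hypothetical example, not the study’s actual pipeline: it assumes SciPy’s Welch estimator, an arbitrary sampling rate, and synthetic noise standing in for a real PSG channel.

```python
import numpy as np
from scipy.signal import welch

# Hypothetical sketch of one spectral feature: relative band power from a
# single EEG channel. Synthetic white noise stands in for a real recording.
fs = 200                                  # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 30)        # one 30-second epoch

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 4)  # Welch power spectral density

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12), "sigma": (12, 16)}
total = psd[(freqs >= 0.5) & (freqs < 16)].sum()

rels = {}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    rels[name] = psd[mask].sum() / total  # band power as a fraction of total
    print(f"{name}: {rels[name]:.2f}")
```

Comparing a patient’s relative power in a band against normative values is the kind of “higher power or lower power” reading Dr. Sun describes; the spindle and slow-oscillation features require considerably more processing.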
Next steps
This research is in its early stage, Dr. Sun said, but at some point the data collected from sleep studies could be paired with machine learning to make the model workable for evaluating individual patients. “Our goal is to first make this individualized,” he said. “We want to minimize the noise in the recording and minimize the night-to-night variability in the findings. There is some clinical-informed approach and there is also some algorithm-informed approach where you can minimize the variation over time.”
The model also has the potential to predict outcomes, particularly with chronic diseases such as diabetes and dementia, well before a diagnosis is made, he said.
‘Fascinating’ and ‘provocative’
Donald Bliwise, PhD, professor of neurology at Emory Sleep Center in Atlanta, said the study was “fascinating; it’s provocative; it’s exciting and interesting,” but added, “Sleep is vital for health. That’s abundantly clear in a study like that, but trying to push it a little bit further with all of these 86 measurements of the EEG, I think it becomes complicated.”
The study methodology, particularly the use of cumulative incidence of various diseases, was laudable, he said, and simpler EEG-measured sleep features, such as alpha band power, “make intuitive sense.”
But it’s less clear how the more sophisticated features the study model used – for example, kurtosis of theta frequency or coupling between spindle and slow oscillation – rank on sleep quality, he said, adding that the researchers have most likely done that analysis but could not fit it into the presentation format.
“Kurtosis of the theta frequency band we don’t get on everyone in the sleep lab,” Dr. Bliwise said. “We might be able to, but I don’t know how to quite plug that into a turnkey model.”
The clinical components of the study were conducted by M. Brandon Westover, MD, PhD, at Massachusetts General Hospital, and Robert J. Thomas, MD, at Beth Israel Deaconess Medical Center, both in Boston. The study received support from the American Academy of Sleep Medicine Foundation. Dr. Sun has no relevant disclosures. Dr. Bliwise has no disclosures.
AT SLEEP 2022
Opioid use in the elderly a dementia risk factor?
Opioid use in older adults is associated with an increased risk of dementia, in new findings that suggest exposure to these drugs may be another modifiable risk factor for dementia.
“Clinicians and others may want to consider that opioid exposure in those aged 75-80 increases dementia risk, and to balance the potential benefits of opioid use in old age with adverse side effects,” said Stephen Z. Levine, PhD, professor, department of community mental health, University of Haifa (Israel).
The study was published online in the American Journal of Geriatric Psychiatry.
Widespread use
Evidence points to a relatively high rate of opioid prescriptions among older adults. A Morbidity and Mortality Weekly Report noted 19.2% of the U.S. adult population filled an opioid prescription in 2018, with the rate in those over 65 double that of adults aged 20-24 years (25% vs. 11.2%).
Disorders and illnesses for which opioids might be prescribed, including cancer and some pain conditions, “are far more prevalent in old age than at a younger age,” said Dr. Levine.
This high rate of opioid use underscores the need to consider the risks of opioid use in old age, said Dr. Levine. “Unfortunately, studies of the association between opioid use and dementia risk in old age are few, and their results are inconsistent.”
The study included 91,307 Israeli citizens aged 60 and over without dementia who were enrolled in the Meuhedet Healthcare Services, a nonprofit health maintenance organization (HMO) serving 14% of the country’s population. Meuhedet has maintained an up-to-date dementia registry since 2002.
The average age of the study sample was 68.29 years at the start of the study (in 2012).
In Israel, opioids are prescribed for a 30-day period. In this study, opioid exposure was defined as opioid medication fills covering 60 days (or two prescriptions) within a 120-day interval.
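The exposure rule as stated can be expressed as a small function. This is my own illustration of the described definition (two 30-day fills, 60 covered days, within 120 days); the function name and simplifying assumption that each fill covers exactly 30 days are mine, not the authors’.

```python
from datetime import date, timedelta

def is_opioid_exposed(fill_dates, days_per_fill=30, required_days=60, window_days=120):
    """Return True if fills cover >= required_days within any window_days interval.

    Hypothetical implementation of the stated rule: each prescription fill is
    assumed to cover exactly 30 days, so exposure reduces to two fills falling
    inside a 120-day window. Overlapping coverage is not truncated.
    """
    fills = sorted(fill_dates)
    for i, start in enumerate(fills):
        window_end = start + timedelta(days=window_days)
        covered = sum(days_per_fill for d in fills[i:] if d < window_end)
        if covered >= required_days:
            return True
    return False
```

For example, fills on Jan. 1 and Mar. 1 of the same year would count as exposed, while a single fill, or two fills seven months apart, would not.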
The primary outcome was incident dementia during follow-up from Jan. 1, 2013 to Oct. 30, 2017. The analysis controlled for a number of factors, including age, sex, smoking status, health conditions such as arthritis, depression, diabetes, osteoporosis, cognitive decline, vitamin deficiencies, cancer, cardiovascular conditions, and hospitalizations for falls.
Researchers also accounted for the competing risk of mortality.
During the study, 3.1% of subjects were exposed to opioids at a mean age of 73.94 years, and 5.8% of subjects developed dementia at an average age of 78.07 years.
Increased dementia risk
The risk of incident dementia was significantly increased in those exposed to opioids versus unexposed individuals in the 75- to 80-year age group (adjusted hazard ratio, 1.39; 95% confidence interval, 1.01-1.92; z statistic = 2.02; P < .05).
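The reported hazard ratio, confidence interval, and z statistic can be checked against one another, since all three flow from the standard error of log(HR). This is a back-of-envelope consistency sketch using only the figures quoted above, not a reanalysis of the study data.

```python
import math

# Consistency check: for a hazard ratio, the 95% CI bounds and the z statistic
# are both determined by the standard error of log(HR).
hr, lo, hi = 1.39, 1.01, 1.92  # adjusted HR and 95% CI reported in the study

se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE of log(HR) implied by the CI
z = math.log(hr) / se                            # implied z statistic

print(f"implied SE = {se:.3f}, implied z = {z:.2f}")
```

The implied z comes out near the reported 2.02, which is what one would expect if the three numbers were computed from the same model fit.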
The authors noted that the effect size for opioid exposure in this elderly age group is similar to that of other potentially modifiable risk factors for dementia, including body mass index and smoking.
The current study could not determine the biological explanation for the increased dementia risk among older opioid users. “Causal notions are challenging in observational studies and should be viewed with caution,” Dr. Levine noted.
However, a plausible mechanism highlighted in the literature is that opioids promote apoptosis of microglia and neurons that contribute to neurodegenerative diseases, he said.
The study included 14 sensitivity analyses, including those that looked at females, subjects older than 70, smokers, and groups with and without comorbid health conditions. The only sensitivity analysis that didn’t have similar findings to the primary analysis looked at dementia risk restricted to subjects without a vitamin deficiency.
“It’s reassuring that 13 of 14 sensitivity analyses found a significant association between opioid exposure and dementia risk,” said Dr. Levine.
Some prior studies did not show an association between opioid exposure and dementia risk. One possible reason for the discrepancy with the current findings is that the previous research didn’t account for age-specific opioid use effects, or the competing risk of mortality, said Dr. Levine.
Clinicians have a number of potential alternatives to opioids for treating various conditions, including acetaminophen, nonsteroidal anti-inflammatory drugs, amine reuptake inhibitors (ARIs), membrane stabilizers, muscle relaxants, topical capsaicin, botulinum toxin, cannabinoids, and steroids.
A limitation of the study was that it didn’t adjust for all possible comorbid health conditions, including vascular conditions, or for benzodiazepine use and surgical procedures.
In addition, since up to 50% of dementia cases go undetected, it’s possible that some in the opioid-unexposed group actually had undiagnosed dementia, which would reduce the effect sizes in the results.
Reverse causality is also a possibility as the neuropathological process associated with dementia could have started prior to opioid exposure. In addition, the results are limited to prolonged opioid exposure.
Interpret with caution
Commenting on the study, David Knopman, MD, a neurologist at Mayo Clinic in Rochester, Minn., whose research involves late-life cognitive disorders, was skeptical.
“On the face of it, the fact that an association was seen only in one narrow age range – 75+ to 80 years – ought to raise serious suspicion about the reliability and validity of the claim that opioid use is a risk factor for dementia,” he said.
Although the researchers performed several sensitivity analyses, including accounting for mortality, “pharmacoepidemiological studies are terribly sensitive to residual biases” related to physician and patient choices related to medication use, added Dr. Knopman.
The claim that opioids are a dementia risk “should be viewed with great caution” and should not influence use of opioids where they’re truly indicated, he said.
“It would be a great pity if patients with pain requiring opioids avoid them because of fears about dementia based on the dubious relationship between age and opioid use.”
Dr. Levine and Dr. Knopman report no relevant financial disclosures.
A version of this article first appeared on Medscape.com.
FROM AMERICAN JOURNAL OF GERIATRIC PSYCHIATRY
Hearing, vision loss combo a colossal risk for cognitive decline
The combination of hearing loss and vision loss is linked to an eightfold increased risk of cognitive impairment, new research shows.
Investigators analyzed data on more than 5 million U.S. seniors. Adjusted results show that participants with hearing impairment alone had more than twice the odds of also having cognitive impairment, while those with vision impairment alone had more than triple the odds of cognitive impairment.
However, those with dual sensory impairment (DSI) had an eightfold higher risk for cognitive impairment.
In addition, half of the participants with DSI also had cognitive impairment. Of those with cognitive impairment, 16% had DSI, compared with only about 2% of their peers without cognitive impairment.
“The findings of the present study may inform interventions that can support older people with concurrent sensory impairment and cognitive impairment,” said lead author Esme Fuller-Thomson, PhD, professor, Factor-Inwentash Faculty of Social Work, University of Toronto.
“Special attention, in particular, should be given to those aged 65-74 who have serious hearing and/or vision impairment [because], if the relationship with dementia is found to be causal, such interventions can potentially mitigate the development of cognitive impairment,” said Dr. Fuller-Thomson, who is also director of the Institute for Life Course and Aging and a professor in the department of family and community medicine and faculty of nursing, all at the University of Toronto.
The findings were published online in the Journal of Alzheimer’s Disease Reports.
Sensory isolation
Hearing and vision impairment increase with age; it is estimated that one-third of U.S. adults between the ages of 65 and 74 experience hearing loss, and 4% experience vision impairment, the investigators note.
“The link between dual hearing loss and seeing loss and mental health problems such as depression and social isolation have been well researched, but we were very interested in the link between dual sensory loss and cognitive problems,” Dr. Fuller-Thomson said.
Additionally, “there have been several studies in the past decade linking hearing loss to dementia and cognitive decline, but less attention has been paid to cognitive problems among those with DSI, despite this group being particularly isolated,” she said. Existing research into DSI suggests an association with cognitive decline; the current investigators sought to expand on this previous work.
To do so, they used merged data from 10 consecutive waves from 2008 to 2017 of the American Community Survey (ACS), which was conducted by the U.S. Census Bureau. The ACS is a nationally representative sample of 3.5 million randomly selected U.S. addresses and includes community-dwelling adults and those residing in institutional settings.
Participants aged 65 or older (n = 5,405,135; 56.4% women) were asked yes/no questions regarding serious cognitive impairment, hearing impairment, and vision impairment. A proxy, such as a family member or nursing home staff member, provided answers for individuals not capable of self-report.
Potential confounding variables included age, race/ethnicity, sex, education, and household income.
Potential mechanisms
Results showed that, among those with cognitive impairment, there was a higher prevalence of hearing impairment, vision impairment, and DSI than among their peers without cognitive impairment; in addition, a lower percentage of these persons had no sensory impairment (P < .001).
The prevalence of DSI climbed with age, from 1.5% for respondents aged 65-74 years to 2.6% for those aged 75-84 and to 10.8% in those 85 years and older.
Individuals with higher levels of poverty also had higher levels of DSI. The prevalence of DSI was higher among those who had not completed high school than among high school or university graduates (6.3% vs. 3.1% and 1.8%, respectively).
After controlling for age, race, education, and income, the researchers found “substantially” higher odds of cognitive impairment in those with vs. those without sensory impairments.
“The magnitude of the odds of cognitive impairment by sensory impairment was greatest for the youngest cohort (age 65-74) and lowest for the oldest cohort (age 85+),” the investigators wrote. Among participants in the youngest cohort, there was a “dose-response relationship” for those with hearing impairment only, visual impairment only, and DSI.
Because the study was observational, it “does not provide sufficient information to determine the reasons behind the observed link between sensory loss and cognitive problems,” Dr. Fuller-Thomson said. However, there are “several potential causal mechanisms [that] warrant future research.”
The “sensory deprivation hypothesis” suggests that DSI could cause cognitive deterioration because of decreased auditory and visual input. The “resource allocation hypothesis” posits that hearing- or vision-impaired older adults “may use more cognitive resources to accommodate for sensory deficits, allocating fewer cognitive resources for higher-order memory processes,” the researchers wrote. Hearing impairment “may also lead to social disengagement among older adults, hastening cognitive decline due to isolation and lack of stimulation,” they added.
Reverse causality is also possible. In the “cognitive load on perception” hypothesis, cognitive decline may lead to declines in hearing and vision because of “decreased resources for sensory processing.”
In addition, the association may be noncausal. “The ‘common cause hypothesis’ theorizes that sensory impairment and cognitive impairment may be due to shared age-related degeneration of the central nervous system ... or frailty,” Dr. Fuller-Thomson said.
Parallel findings
The results are similar to those from a study conducted by Phillip Hwang, PhD, of the department of anatomy and neurobiology, Boston University, and colleagues that was published online in JAMA Network Open.
They analyzed data on 8 years of follow-up of 2,927 participants in the Cardiovascular Health Study (mean age, 74.6 years; 58.2% women).
Compared with no sensory impairment, DSI was associated with increased risk for all-cause dementia and Alzheimer’s disease, but not with vascular dementia.
“Future work in health care guidelines could consider incorporating screening of sensory impairment in older adults as part of risk assessment for dementia,” Nicholas Reed, AuD, and Esther Oh, MD, PhD, both of Johns Hopkins University, Baltimore, wrote in an accompanying editorial.
Accurate testing
Commenting on both studies, Heather Whitson, MD, professor of medicine (geriatrics) and ophthalmology and director at the Duke University Center for the Study of Aging and Human Development, Durham, N.C., said both “add further strength to the evidence base, which has really converged in the last few years to support that there is a link between sensory health and cognitive health.”
However, “we still don’t know whether hearing/vision loss causes cognitive decline, though there are plausible ways that sensory loss could affect cognitive abilities like memory, language, and executive function,” she said.
Dr. Whitson, who was not involved with the research, is also codirector of the Duke/University of North Carolina Alzheimer’s Disease Research Center at Duke University, Durham, N.C., and the Durham VA Medical Center.
“The big question is whether we can improve patients’ cognitive performance by treating or accommodating their sensory impairments,” she said. “If safe and feasible things like hearing aids or cataract surgery improve cognitive health, even a little bit, it would be a huge benefit to society, because sensory loss is very common, and there are many treatment options,” Dr. Whitson added.
Dr. Fuller-Thomson emphasized that practitioners should “consider the full impact of sensory impairment on cognitive testing methods, as both auditory and visual testing methods may fail to take hearing and vision impairment into account.”
Thus, “when performing cognitive tests on older adults with sensory impairments, practitioners should ensure they are communicating audibly and/or using visual speech cues for hearing-impaired individuals, eliminating items from cognitive tests that rely on vision for those who are visually impaired, and using physical cues for individuals with hearing or dual sensory impairment, as this can help increase the accuracy of testing and prevent confounding,” she said.
The study by Fuller-Thomson et al. was funded by a donation from Janis Rotman. Its investigators have reported no relevant financial relationships. The study by Hwang et al. was funded by contracts from the National Heart, Lung, and Blood Institute, the National Institute of Neurological Disorders and Stroke, and the National Institute on Aging. Dr. Hwang reports no relevant financial relationships. The other investigators’ disclosures are listed in the original article. Dr. Reed received grants from the National Institute on Aging during the conduct of the study and has served on the advisory board of Neosensory outside the submitted work. Dr. Oh and Dr. Whitson report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM THE JOURNAL OF ALZHEIMER’S DISEASE REPORTS
Long COVID neuropsychiatric deficits greater than expected
NEW ORLEANS – Neuropsychiatric deficits in patients with long COVID are greater than expected, a new study shows, adding to mounting evidence of the significant toll the chronic condition can have on mental health.
“Many clinicians have observed the symptoms we describe in this study, however this report is among the first which identify the specific deficits using neuropsychological testing to better characterize the syndrome,” Sean T. Lynch, MD, first author of a study on the issue presented at the annual meeting of the American Psychiatric Association, said in an interview.
Dr. Lynch, of the department of psychiatry, Westchester Medical Center Health System, Valhalla, N.Y., and his colleagues enrolled 60 participants who had experienced acute COVID-19 disease 6-8 months earlier and had undergone neuropsychological, psychiatric, medical, functional, and quality-of-life assessments. Results from the study were published online in the Journal of the Academy of Consultation–Liaison Psychiatry (2022 Jan 25. doi: 10.1016/j.jaclp.2022.01.003).
Among the study participants, 32 were seeking treatment for brain fog in a clinical program for survivors of COVID-19, while the remaining 28 were part of an ongoing longitudinal investigation of neuropsychological, medical, and psychiatric sequelae of COVID-19, but were not seeking care for the persistent symptoms.
Assessments for neurocognitive impairment included a battery of tests used in infectious and other diseases, including the Test of Premorbid Function, the Patient Assessment of Own Function, the Trail Making Test parts A and B, the Stroop Color and Word Test, and others.
Overall, the battery of assessments showed that 37 participants (62%) had neuropsychological test impairment, with results below the 16th percentile in two tests, while 16 (27%) showed scores indicative of severe impairment (below the second percentile in at least one test and below the 16th percentile in one test).
NEW ORLEANS – , adding to mounting evidence of the significant toll the chronic condition can have on mental health.
“Many clinicians have observed the symptoms we describe in this study; however, this report is among the first to identify the specific deficits using neuropsychological testing to better characterize the syndrome,” Sean T. Lynch, MD, first author of a study on the issue presented at the annual meeting of the American Psychiatric Association, said in an interview.
Dr. Lynch, of the department of psychiatry, Westchester Medical Center Health System, Valhalla, N.Y., and his colleagues enrolled 60 participants who had experienced acute COVID-19 disease 6-8 months earlier and had undergone neuropsychological, psychiatric, medical, functional, and quality-of-life assessments. Results from the study were published online in the Journal of the Academy of Consultation–Liaison Psychiatry (2022 Jan 25. doi: 10.1016/j.jaclp.2022.01.003).
Among the study participants, 32 were seeking treatment for brain fog in a clinical program for survivors of COVID-19, while the remaining 28 were part of an ongoing longitudinal investigation of neuropsychological, medical, and psychiatric sequelae of COVID-19, but were not seeking care for the persistent symptoms.
Assessments for neurocognitive impairment included a battery of tests used in infectious and other diseases, including the Test of Premorbid Function, the Patient Assessment of Own Function, the Trail Making Test parts A and B, the Stroop Color and Word Test, and others.
Overall, the battery of assessments showed that 37 participants (62%) had neuropsychological test impairment, defined as results below the 16th percentile on two tests, while 16 (27%) had scores indicative of severe impairment (below the 2nd percentile on at least one test and below the 16th percentile on one test).
Those reporting brain fog had scores that were even lower than expected on tests of attention, processing speed, memory, and executive function. And among those reporting brain fog, significantly more had scores reflecting severe impairment compared with the controls (38% vs. 14%; P < .04).
“Based on what we’ve observed in our patients and what others have previously reported, we did expect to find some impairment in this study sample,” Dr. Lynch noted.
“However, we were surprised to find that 27% of the study sample had extremely low neuropsychological test scores, meaning that they scored at least two standard deviations below the expected score on at least one neuropsychological test based on their age and level of education.”
The brain fog group also reported significantly higher levels of depression, fatigue, PTSD, and functional difficulties, and lower quality of life.
Severe impairment on the neuropsychological tests correlated with the extent of acute COVID-19 symptoms, as well as depression scores, number of medical comorbidities, and subjective cognitive complaints.
An analysis of serum levels of the inflammatory markers among 50 of the 60 participants showed that 45% of the patients had an elevated IL-6, 20% had elevated TNF-alpha, and 41% had elevated CRP, compared with reference ranges.
IL-6 levels were found to correlate with acute COVID-19 symptoms, the number of medical comorbidities, fatigue, and measures of executive function, while C-reactive protein (CRP) correlated with current COVID-19 symptoms and depression scores.
In terms of clinical factors that might predict low neuropsychological test scores, Dr. Lynch noted that the “markers that we found to be significant included severity of acute COVID-19 illness, current post-COVID-19 symptoms, measures of depression and anxiety, level of fatigue, and number of medical comorbidities.”
Dr. Lynch noted that follow-up assessments in the ongoing study, extending up to 18 months, are currently underway. “The [follow-ups] will examine if symptoms improve over time and evaluate if any intervention that took place was successful,” he said.
Survey supports findings
The detrimental effects of mental health symptoms in long COVID were further supported in another study at the APA meeting, an online survey of 787 survivors of acute COVID-19.
In the community survey, presented by Michael Van Ameringen, MD, a professor in the department of psychiatry and behavioral neurosciences at McMaster University, in Hamilton, Ont., all respondents (100%) reported having persistent symptoms of the virus, and as many as 68% indicated that they had not returned to normal functioning, despite only 15% of the respondents having been hospitalized with COVID-19.
A large proportion showed significant depression, anxiety, and posttraumatic stress disorder (PTSD), and the most commonly reported persistent symptoms were fatigue in 75.9% of respondents, brain fog in 67.9%, concentration difficulties in 61.1%, and weakness in 51.2%.
As many as 88.2% of patients said they experienced persistent neurocognitive symptoms, with poor memory and concentration; 56% reported problems with word finding; and 54.1% had slowed thinking.
The respondents showed high rates of anxiety (41.7%) and depression (61.4%), as determined by scores above 9 on the Generalized Anxiety Disorder–7 (GAD-7) and the Patient Health Questionnaire–9 (PHQ-9), respectively.
As many as 40.5% of respondents showed probable PTSD, with scores above 30 on the PTSD checklist (PCL-5). Their mean resilience score on the Brief Resilient Coping Scale was 13.5, suggesting low resilience.
Among the respondents, 43.3% said they had received past treatment for mental health, while 33.5% were currently receiving mental health treatment.
Dr. Van Ameringen noted the important limitation of the study being an online survey with no control group, but said the responses nevertheless raise the question of the role of prior psychiatric disorders in long COVID.
“In our sample, 40% of respondents had a past psychiatric history, so you wonder if that also makes you vulnerable to long COVID,” he said in an interview.
“About a third were getting psychiatric help, but I think the more impaired you are, the more likely you are to seek help.”
Those who were hospitalized with COVID-19 were at higher risk of PTSD than those not hospitalized (P < .001), as were respondents under age 30 (P < .05) or aged 31-50 (P < .01), compared with those over 50.
Dr. Van Ameringen noted that the survey’s high rate of subjects who had not returned to normal functioning was especially striking.
“This is not a minor issue – these are people who are no longer functioning in society,” he said.
In pandemics, the brain tends to be ‘overlooked’
Further addressing the neurological effects of COVID-19 at the APA meeting, Avindra Nath, MD, clinical director of the National Institute of Neurological Disorders and Stroke in Bethesda, Md., noted that the persisting cognitive and psychiatric symptoms after illness, such as brain fog, depression, and anxiety, are not necessarily unique to COVID-19.
“We have seen this before,” he said. “There have been at least seven or eight human coronaviruses, and the interesting thing is each one affects the brain and causes neurological complications.”
The viruses are classified differently and use slightly different receptors, “but the consequences are the same.”
Of note, however, research published in The Lancet Psychiatry (2021 May. doi: 10.1016/S2215-0366[21]00084-5) revealed that rates of outcomes such as dementia and mood and anxiety disorders are significantly higher after COVID-19 than after other respiratory infections, with the differences increasing at 180 days after the index event.
Dr. Nath noted that, over the decades, he has observed that in pandemics “the brain tends to get overlooked.” He explained that “what can be most important in the end is what happened in the brain, because those are the things that really cause the long-term consequences.”
“These patients are depressed; they have dementia, they have brain fog, and even now that we recognize these issues, we haven’t done a very good job of studying them,” he said. “There’s so much we still don’t know, and a lot of patients are left with these symptoms and nowhere to go.”
Dr. Lynch, Dr. Van Ameringen, and Dr. Nath had no disclosures to report.
AT APA 2022
More evidence dementia not linked to PPI use in older people
Controversy regarding the purported link between the use of proton pump inhibitors (PPIs) or histamine H2 receptor antagonists (H2RAs) and risk for dementia continues.
Adding to the “no link” column comes new evidence from a study presented at the annual Digestive Disease Week® (DDW).
Among almost 19,000 people, no association was found between the use of these agents and a greater likelihood of incident dementia, Alzheimer’s disease, or cognitive decline in people older than 65 years.
“We found that baseline PPI or H2RA use in older adults was not associated with dementia, with mild cognitive impairment, or declines in cognitive scores over time,” said lead author Raaj Shishir Mehta, MD, a gastroenterology fellow at Massachusetts General Hospital in Boston.
“While deprescribing efforts are important, especially when medications are not indicated, these data provide reassurance about the cognitive impacts of long-term use of PPIs in older adults,” he added.
Growing use, growing concern
As PPI use has increased worldwide, so too have concerns over the adverse effects from their long-term use, Dr. Mehta said.
“One particular area of concern, especially among older adults, is the link between long-term PPI use and risk for dementia,” he said.
Igniting the controversy was a February 2016 study published in JAMA Neurology that showed a positive association between PPI use and dementia in residents of Germany aged 75 years and older. Researchers linked PPI use to a 44% increased risk of dementia over 5 years.
The 2016 study was based on claims data, which can introduce “inaccuracy or bias in defining dementia cases,” Dr. Mehta said. He noted that it and other previous studies also were limited by an inability to account for concomitant medications or comorbidities.
To overcome these limitations in their study, Dr. Mehta and colleagues analyzed medication data collected during in-person visits and asked experts to confirm dementia outcomes. The research data come from ASPREE, a large aspirin study of 18,846 people older than 65 years in the United States and Australia. Participants were enrolled from 2010 to 2014. A total of 566 people developed incident dementia during follow-up.
The researchers had data on alcohol consumption and other lifestyle factors, as well as information on comorbidities, hospitalizations, and overall well-being.
“Perhaps the biggest strength of our study is our rigorous neurocognitive assessments,” Dr. Mehta said.
They assessed cognition at baseline and at years 1, 3, 5, and 7 using a battery of tests. An expert panel of neurologists, neuropsychologists, and geriatricians adjudicated cases of dementia, in accordance with DSM-IV criteria. If the diagnosis was unclear, they referred people for additional workup, including neuroimaging.
Cox proportional hazards regression and mixed-effects modeling were used to relate medication use to cognitive scores.
All analyses were adjusted for age, sex, body mass index, alcohol use, family history of dementia, medications, and other medical comorbidities.
At baseline, PPI users were more likely to be White, have fewer years of education, and have higher rates of hypertension, diabetes, and kidney disease. This group also was more likely to be taking five or more medications.
Key points
During 80,976 person-years of follow-up, there were 566 incident cases of dementia, including 235 probable cases of Alzheimer’s disease and 331 other dementias.
Baseline PPI use, in comparison with nonuse, was not associated with incident dementia (hazard ratio, 0.86; 95% confidence interval, 0.70-1.05).
“Similarly, when we look specifically at Alzheimer’s disease or mixed types of dementia, we find no association between baseline PPI use and dementia,” Dr. Mehta said.
When they excluded people already taking PPIs at baseline, they found no association between starting PPIs and developing dementia over time.
Secondary aims of the study included looking for a link between PPI use and mild cognitive impairment or significant changes in cognition over time. In both cases, no association emerged: baseline PPI use was not associated with cognitive impairment/no dementia (also known as mild cognitive impairment) or with changes in overall cognitive test scores over time.
To determine whether any association could be a class effect of acid suppression medication, they assessed use of H2RA medications and development of incident dementia. Again, the researchers found no link.
A diverse multinational population from urban and rural areas was a strength of the study, as was the “very rigorous cognitive testing with expert adjudication of our endpoints,” Dr. Mehta said. In addition, fewer than 5% of patients were lost to follow-up.
In terms of limitations, this was an observational study, “so residual confounding is always possible,” he added. “But I’ll emphasize that we are among the largest studies to date with a wealth of covariates.”
Why the different findings?
The study was “really well done,” session moderator Paul Moayyedi, MD, said during the Q&A session at DDW 2022.
Dr. Moayyedi, a professor of medicine at McMaster University, Hamilton, Ont., asked Dr. Mehta why he “found absolutely no signal, whereas the German study did.”
“It’s a good question,” Dr. Mehta responded. “If you look across the board, there have been conflicting results.”
The disparity could be related to how researchers conducting claims data studies classify dementia, he noted.
“If you look at the nitty-gritty details over 5 years, almost 40% of participants [in those studies] end up with a diagnosis of dementia, which is quite high,” Dr. Mehta said. “That raises questions about whether the diagnosis of dementia is truly accurate.”
Dr. Mehta and Dr. Moayyedi reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Controversy regarding the purported link between the use of proton pump inhibitors (PPIs) or histamine H2 receptor antagonists (H2RAs) and risk for dementia continues.
Adding to the “no link” column comes new evidence from a study presented at the annual Digestive Disease Week® (DDW) .
Among almost 19,000 people, no association was found between the use of these agents and a greater likelihood of incident dementia, Alzheimer’s disease, or cognitive decline in people older than 65 years.
“We found that baseline PPI or H2RA use in older adults was not associated with dementia, with mild cognitive impairment, or declines in cognitive scores over time,” said lead author Raaj Shishir Mehta, MD, a gastroenterology fellow at Massachusetts General Hospital in Boston.
“While deprescribing efforts are important, especially when medications are not indicated, these data provide reassurance about the cognitive impacts of long-term use of PPIs in older adults,” he added.
Growing use, growing concern
As PPI use has increased worldwide, so too have concerns over the adverse effects from their long-term use, Dr. Mehta said.
“One particular area of concern, especially among older adults, is the link between long-term PPI use and risk for dementia,” he said.
Igniting the controversy was a February 2016 study published in JAMA Neurology that showed a positive association between PPI use and dementia in residents of Germany aged 75 years and older. Researchers linked PPI use to a 44% increased risk of dementia over 5 years.
The 2016 study was based on claims data, which can introduce “inaccuracy or bias in defining dementia cases,” Dr. Mehta said. He noted that it and other previous studies also were limited by an inability to account for concomitant medications or comorbidities.
To overcome these limitations in their study, Dr. Mehta and colleagues analyzed medication data collected during in-person visits and asked experts to confirm dementia outcomes. The research data come from ASPREE, a large aspirin study of 18,846 people older than 65 years in the United States and Australia. Participants were enrolled from 2010 to 2014. A total of 566 people developed incident dementia during follow-up.
The researchers had data on alcohol consumption and other lifestyle factors, as well as information on comorbidities, hospitalizations, and overall well-being.
“Perhaps the biggest strength of our study is our rigorous neurocognitive assessments,” Dr. Mehta said.
They assessed cognition at baseline and at years 1, 3, 5, and 7 using a battery of tests. An expert panel of neurologists, neuropsychologists, and geriatricians adjudicated cases of dementia, in accordance with DSM-IV criteria. If the diagnosis was unclear, they referred people for additional workup, including neuroimaging.
Cox proportional hazards, regression, and/or mixed effects modeling were used to relate medication use with cognitive scores.
All analyses were adjusted for age, sex, body mass index, alcohol use, family history of dementia, medications, and other medical comorbidities.
At baseline, PPI users were more likely to be White, have fewer years of education, and have higher rates of hypertension, diabetes, and kidney disease. This group also was more likely to be taking five or more medications.
Key points
Controversy regarding the purported link between the use of proton pump inhibitors (PPIs) or histamine H2 receptor antagonists (H2RAs) and risk for dementia continues.
Adding to the “no link” column comes new evidence from a study presented at the annual Digestive Disease Week® (DDW).
Among almost 19,000 people, no association was found between the use of these agents and a greater likelihood of incident dementia, Alzheimer’s disease, or cognitive decline in people older than 65 years.
“We found that baseline PPI or H2RA use in older adults was not associated with dementia, with mild cognitive impairment, or declines in cognitive scores over time,” said lead author Raaj Shishir Mehta, MD, a gastroenterology fellow at Massachusetts General Hospital in Boston.
“While deprescribing efforts are important, especially when medications are not indicated, these data provide reassurance about the cognitive impacts of long-term use of PPIs in older adults,” he added.
Growing use, growing concern
As PPI use has increased worldwide, so too have concerns over the adverse effects from their long-term use, Dr. Mehta said.
“One particular area of concern, especially among older adults, is the link between long-term PPI use and risk for dementia,” he said.
Igniting the controversy was a February 2016 study published in JAMA Neurology that showed a positive association between PPI use and dementia in residents of Germany aged 75 years and older. Researchers linked PPI use to a 44% increased risk of dementia over 5 years.
The 2016 study was based on claims data, which can introduce “inaccuracy or bias in defining dementia cases,” Dr. Mehta said. He noted that it and other previous studies also were limited by an inability to account for concomitant medications or comorbidities.
To overcome these limitations in their study, Dr. Mehta and colleagues analyzed medication data collected during in-person visits and asked experts to confirm dementia outcomes. The research data come from ASPREE, a large aspirin study of 18,846 people older than 65 years in the United States and Australia. Participants were enrolled from 2010 to 2014. A total of 566 people developed incident dementia during follow-up.
The researchers had data on alcohol consumption and other lifestyle factors, as well as information on comorbidities, hospitalizations, and overall well-being.
“Perhaps the biggest strength of our study is our rigorous neurocognitive assessments,” Dr. Mehta said.
They assessed cognition at baseline and at years 1, 3, 5, and 7 using a battery of tests. An expert panel of neurologists, neuropsychologists, and geriatricians adjudicated cases of dementia, in accordance with DSM-IV criteria. If the diagnosis was unclear, they referred people for additional workup, including neuroimaging.
Cox proportional hazards regression and mixed-effects modeling were used to relate medication use with cognitive scores.
All analyses were adjusted for age, sex, body mass index, alcohol use, family history of dementia, medications, and other medical comorbidities.
At baseline, PPI users were more likely to be White, have fewer years of education, and have higher rates of hypertension, diabetes, and kidney disease. This group also was more likely to be taking five or more medications.
Key points
During 80,976 person-years of follow-up, there were 566 incident cases of dementia, including 235 probable cases of Alzheimer’s disease and 331 other dementias.
Baseline PPI use, in comparison with nonuse, was not associated with incident dementia (hazard ratio, 0.86; 95% confidence interval, 0.70-1.05).
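For readers who want to sanity-check reported figures like these: a Wald-type 95% confidence interval around a hazard ratio implies a standard error on the log scale, and the interval can be rebuilt from it. The short sketch below is purely illustrative (it is not the study's analysis code) and applies the check to the HR of 0.86 with 95% CI 0.70-1.05 reported above.

```python
import math

def rebuild_hr_ci(hr, ci_low, ci_high, z=1.96):
    """Recover the standard error of log(HR) implied by a reported
    Wald-type 95% CI, then reconstruct the interval as a check."""
    log_hr = math.log(hr)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * z)
    return math.exp(log_hr - z * se), math.exp(log_hr + z * se)

low, high = rebuild_hr_ci(0.86, 0.70, 1.05)
# The rebuilt interval spans 1.0, consistent with "no association".
```

Because the interval crosses 1.0, the 0.86 point estimate is compatible with no effect, which is exactly how the authors interpret it.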
“Similarly, when we look specifically at Alzheimer’s disease or mixed types of dementia, we find no association between baseline PPI use and dementia,” Dr. Mehta said.
When they excluded people already taking PPIs at baseline, they found no association between starting PPIs and developing dementia over time.
Secondary aims of the study included looking for a link between PPI use and mild cognitive impairment or significant changes in cognition over time. Again, no association emerged: baseline PPI use was not linked with cognitive impairment/no dementia (also known as mild cognitive impairment) or with changes in overall cognitive test scores over time.
To determine whether any association could be a class effect of acid suppression medication, they assessed use of H2RA medications and development of incident dementia. Again, the researchers found no link.
A diverse multinational population from urban and rural areas was a strength of the study, as was the “very rigorous cognitive testing with expert adjudication of our endpoints,” Dr. Mehta said. In addition, fewer than 5% of patients were lost to follow-up.
In terms of limitations, this was an observational study “so residual confounding is always possible,” he added. “But I’ll emphasize that we are among the largest studies to date with [a] wealth of covariates.”
Why the different findings?
The study was “really well done,” session moderator Paul Moayyedi, MD, said during the Q&A session at DDW 2022.
Dr. Moayyedi, a professor of medicine at McMaster University, Hamilton, Ont., asked Dr. Mehta why he “found absolutely no signal, whereas the German study did.”
“It’s a good question,” Dr. Mehta responded. “If you look across the board, there have been conflicting results.”
The disparity could be related to how researchers conducting claims data studies classify dementia, he noted.
“If you look at the nitty-gritty details over 5 years, almost 40% of participants [in those studies] end up with a diagnosis of dementia, which is quite high,” Dr. Mehta said. “That raises questions about whether the diagnosis of dementia is truly accurate.”
Dr. Mehta and Dr. Moayyedi reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM DDW 2022
Many Americans missing an opportunity to prevent dementia
Many American adults have at least one modifiable risk factor for Alzheimer’s disease and related dementias (ADRD), including hypertension, low levels of physical activity, and obesity, new research shows.
Data from the Centers for Disease Control and Prevention reveal that among nearly 162,000 adults aged 45 and older who were surveyed in 2019 as part of the Behavioral Risk Factor Surveillance System (BRFSS), nearly half had high blood pressure and did not achieve aerobic physical activity recommendations. These were the two most common modifiable risk factors for ADRD.
In addition, more than one-third (35%) of adults were obese, 19% had diabetes, 18% had depression, 15% were smokers, 11% had hearing loss, and 10% were binge drinkers.
The findings were published online in the CDC’s Morbidity and Mortality Weekly Report.
A missed prevention opportunity
More than 1 in 10 (11.3%) adults surveyed reported subjective cognitive decline (SCD), an early indicator of possible future ADRD.
The prevalence of SCD increased from about 4% among adults with no modifiable risk factors for ADRD to 25% for those with four or more risk factors.
Adults with SCD were more apt to report having almost all modifiable risk factors and were more likely to report four or more risk factors (34%) than were peers without SCD (13%).
The prevalence of SCD ranged from a high of about 29% in those with depression and 25% in those with hearing loss to 11% in those who reported binge drinking.
In line with previous research, the findings indicate that American Indian or Alaska Native, Black or African American, and Hispanic populations were more likely to have modifiable risk factors for ADRD than other racial groups, the researchers reported.
The CDC’s National Healthy Brain Initiative supports culturally tailored interventions that address ADRD risk factors specifically in these populations.
In 2021, the federal government’s National Plan to Address Alzheimer’s Disease was updated to include a new goal to reduce risk factors for ADRD.
“Given the prevalence of modifiable risk factors for ADRD and anticipated growth of the older adult population and those with ADRD, this new goal has the potential to benefit a large proportion of U.S. adults,” the investigators wrote.
“In addition to helping patients discuss concerns about memory loss, health care professionals should also screen patients for modifiable risk factors, counsel patients with risk factors, and refer them to effective programs and interventions where recommended,” they advised.
A recent report from the Lancet Commission on Dementia Prevention, Intervention, and Care found that modifying 12 risk factors over the life course could delay or prevent 40% of dementia cases.
A version of this article first appeared on Medscape.com.
FROM MMWR
Can fecal transplants help reverse aging?
Transplanting fecal microbiota from young mice into older mice can reverse signs of aging in the gut, brain, and eyes, a team of scientists from the United Kingdom has found. Conversely, transplanting microbiota from old mice to young mice has the opposite effect.
This research provides “tantalizing evidence for the direct involvement of gut microbes in aging and the functional decline of brain function and vision and offers a potential solution in the form of gut microbe replacement therapy,” Simon Carding, PhD, who heads the gut microbes and health research program at the Quadram Institute in Norwich, England, said in a news release.
The study was published online in the journal Microbiome.
The fountain of youth?
Age-related changes in diversity, composition, and function of the gut microbiota are associated with low-grade systemic inflammation, declining tissue function, and increased susceptibility to age-related chronic diseases.
Dr. Carding and colleagues at the Quadram Institute and the University of East Anglia used fecal microbiota transplant (FMT) to exchange the intestinal microbiota of young mice and aged mice.
Young mice who received aged microbiota showed increased intestinal barrier permeability (leaky gut) coupled with upregulated inflammation in the brain and retina, as well as loss of a key functional protein in the eye, they report.
Conversely, these detrimental effects were reversed when microbiota from young mice was transferred to aged mice. FMT with young microbiota also led to enrichment of beneficial taxa in aged mice.
“Our data support the suggestion that altered gut microbiota in old age contributes to intestinal and systemic inflammation, and so may contribute to driving inflammatory pathologies of aged organs,” the study team wrote.
“Targeting the gut-brain axis in aging, by modification of microbial composition to modulate immune and metabolic pathways, may therefore be a potential avenue for therapeutic approaches to age-associated inflammatory and functional decline,” they suggested.
In ongoing studies, the team is working to understand how long the beneficial effects of young donor microbiota last. This will establish whether FMT can promote long-term health benefits in aged individuals and ameliorate age-associated neurodegeneration and retinal functional deterioration.
“Our results provide more evidence of the important links between microbes in the gut and healthy aging of tissues and organs around the body,” lead author Aimée Parker, PhD, from the Quadram Institute, said in the release.
“We hope that our findings will contribute ultimately to understanding how we can manipulate our diet and our gut bacteria to maximize good health in later life,” she added.
Support for this research was provided by the Biotechnology and Biological Sciences Research Council. The authors report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Neurology, psychiatry studies overlook sex as a variable
A large percentage of studies in neurology and psychiatry over the past decade have failed to account for differences between the sexes, according to a team of Canadian researchers.
“Despite the fact there are papers that are using males and females in the studies, they’re not using the males and females in the way that would optimally find the possibility of sex differences,” lead author Liisa A.M. Galea, PhD, told this news organization. Dr. Galea is a professor and distinguished scholar at the Djavad Mowafaghian Center for Brain Health at the University of British Columbia in Vancouver.
The study was published online in Nature Communications.
Optimal design uncommon
Differences in how neurologic and psychiatric diseases affect men and women have been well documented. Women, for example, are more susceptible to severe stroke, and men are more prone to cognitive decline with schizophrenia. With Alzheimer’s disease, women typically have more severe cognitive defects.
The researchers surveyed 3,193 papers that included a multitude of studies. Although most of the papers reported studies that included both sexes, only 19% of surveyed studies used what Dr. Galea called an optimal design for the discovery of sex differences. “What I mean by ‘optimally’ is the design of the experiments and the analysis of sex as a variable,” she said. And in 2019, only 5% of the studies used sex as a variable for determining differences between the sexes, the study found.
In the current research, two authors read the methods and results of each study described in each paper, Dr. Galea said. The readers noted whether the paper reported the study sample size and whether the studies used a balanced design. The surveyed journals include Nature Neuroscience, Neuron, Journal of Neuroscience, Molecular Psychiatry, Biological Psychiatry, and Neuropsychopharmacology.
‘Not much is changing’
“I had a suspicion that this was happening,” Dr. Galea said. “I didn’t know that it’s so bad, to be fair.” The “good news story,” she said, is that more papers considered sex as a factor in the later years surveyed. In 2019, more than 95% of papers across both disciplines reported participants’ sex, compared with about 70% in 2009. However, less than 20% of the papers in all study years reported studies that used sex optimally to determine differences between the sexes.
“The other thing that shocked me,” Dr. Galea said, “was that even despite the fact that we saw this increase in the number of papers that were using males and females, we didn’t see the sort of corresponding increase in those that were using ‘optimal design’ or ‘optimal analysis.’ ” In 2009, 14% of papers used optimal design and 2% used optimal analysis for determining sex differences. By 2019, those percentages were 19% and 5%, respectively.
But even the papers that used both sexes had shortcomings, the study found. Just over one-third of these papers (34.5%) didn’t use a balanced design. Just over one-quarter (25.9%) didn’t identify the sample size, a shortcoming that marked 18% of these studies in 2009 and 33% in 2019. Fifteen percent of papers examined included studies that used both sexes inconsistently.
“That matters, because other studies have found that about 20% of papers are doing some kind of analysis with sex, but we had a suspicion that a lot of studies would include sex as a covariate,” Dr. Galea said. “Essentially what that does is, you remove that variable from the data. So, any statistical variation due to sex is then gone.
“The problem with that,” she added, “is you’re not actually looking to see if there’s an influence of sex; you’re removing it.”
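Dr. Galea's point can be made concrete with a toy simulation (hypothetical data, not drawn from the study): when a treatment effect exists in only one sex, a model that merely adjusts for sex as a covariate reports a diluted pooled effect, while a model with a sex-by-treatment interaction term makes the sex difference visible.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
sex = rng.integers(0, 2, n)        # 0 = male, 1 = female
treat = rng.integers(0, 2, n)      # 0 = control, 1 = treated
# Ground truth: treatment shifts the outcome by +2.0 in females, 0.0 in males.
y = 0.5 + 2.0 * treat * sex + 0.3 * sex + rng.normal(0.0, 1.0, n)

ones = np.ones(n)
# Model A: sex included only as a covariate -> one pooled treatment effect.
Xa = np.column_stack([ones, treat, sex])
beta_a, *_ = np.linalg.lstsq(Xa, y, rcond=None)

# Model B: sex-by-treatment interaction -> sex-specific effects are estimable.
Xb = np.column_stack([ones, treat, sex, treat * sex])
beta_b, *_ = np.linalg.lstsq(Xb, y, rcond=None)

pooled_effect = beta_a[1]   # ~1.0: the 0 and 2 effects averaged together
interaction = beta_b[3]     # ~2.0: the female-male difference, now visible
```

Model A is the "sex as covariate" design the paper criticizes: the true effects of 0 and 2 are averaged into roughly 1, and the sex difference is statistically removed. Model B is one form of the "optimal analysis" being advocated.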
Dr. Galea noted that this study points to a need for funding agencies to demand that researchers meet their mandates on sex- and gender-based analysis. “Despite the mandates, not much is really changing as far as the analysis or design of experiments, and we need to figure out how to change that,” she said. “We need to figure out how to get researchers more interested to use the power of studying sex differences.”
‘Not surprising, but disappointing’
Vladimir Hachinski, MD, professor of neurology and epidemiology at Western University in London, Ont., and former editor in chief of Stroke, told this news organization that women have almost twice the life risk of developing dementia, are at higher risk of stroke below age 35 years, and have more severe strokes and higher rates of disability at any age.
Commenting on the current study, Dr. Hachinski said, “It’s not surprising, but it’s disappointing, because we’ve known the difference for a long time.” He added, “The paper is very important because we were not aware that it was that bad.”
Dr. Hachinski also stated, “This paper needs a lot of reading. It’s a great resource, and it should be highlighted as one of those things that needs to be addressed, because it matters.”
The study was funded by a Natural Sciences and Engineering Research Council of Canada grant and by the British Columbia Women’s Foundation. Dr. Galea and Dr. Hachinski had no relevant disclosures.
A version of this article first appeared on Medscape.com.
The study was published online in Nature Communications.
Optimal design uncommon
Differences in how neurologic and psychiatric diseases affect men and women have been well documented. Women, for example, are more susceptible to severe stroke, and men are more prone to cognitive decline with schizophrenia. With Alzheimer’s disease, women typically have more severe cognitive defects.
The researchers surveyed 3,193 papers, many of which reported multiple studies. Although most of the papers reported studies that included both sexes, only 19% of surveyed studies used what Dr. Galea called an optimal design for the discovery of sex differences. “What I mean by ‘optimally’ is the design of the experiments and the analysis of sex as a variable,” she said. And in 2019, only 5% of the studies used sex as a variable for determining differences between the sexes, the study found.
In the current research, two authors read the methods and results of each study described in each paper, Dr. Galea said. The readers noted whether the paper reported the study sample size and whether the studies used a balanced design. The surveyed journals include Nature Neuroscience, Neuron, Journal of Neuroscience, Molecular Psychiatry, Biological Psychiatry, and Neuropsychopharmacology.
‘Not much is changing’
“I had a suspicion that this was happening,” Dr. Galea said. “I didn’t know that it’s so bad, to be fair.” The “good news story,” she said, is that more papers considered sex as a factor in the later years surveyed. In 2019, more than 95% of papers across both disciplines reported participants’ sex, compared with about 70% in 2009. However, less than 20% of the papers in all study years reported studies that used sex optimally to determine differences between the sexes.
“The other thing that shocked me,” Dr. Galea said, “was that even despite the fact that we saw this increase in the number of papers that were using males and females, we didn’t see the sort of corresponding increase in those that were using ‘optimal design’ or ‘optimal analysis.’ ” In 2009, 14% of papers used optimal design and 2% used optimal analysis for determining sex differences. By 2019, those percentages were 19% and 5%, respectively.
But even the papers that used both sexes had shortcomings, the study found. Just over one-third of these papers (34.5%) didn’t use a balanced design. Just over one-quarter (25.9%) didn’t identify the sample size, a shortcoming that marked 18% of these studies in 2009 and 33% in 2019. Fifteen percent of papers examined included studies that used both sexes inconsistently.
“That matters, because other studies have found that about 20% of papers are doing some kind of analysis with sex, but we had a suspicion that a lot of studies would include sex as a covariate,” Dr. Galea said. “Essentially what that does is, you remove that variable from the data. So, any statistical variation due to sex is then gone.
“The problem with that,” she added, “is you’re not actually looking to see if there’s an influence of sex; you’re removing it.”
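Dr. Galea’s distinction can be illustrated with a small simulation (hypothetical data, not from the study): when an effect exists in only one sex, regressing sex out as a covariate yields a single averaged effect and hides the sex difference, whereas a stratified analysis of the kind she calls “optimal” reveals it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
sex = rng.integers(0, 2, n)        # 0 and 1 are arbitrary hypothetical labels
treatment = rng.integers(0, 2, n)
# Simulated outcome: the treatment effect (size 2) exists only when sex == 1
outcome = 2.0 * treatment * sex + rng.normal(0, 1, n)

# Sex-as-covariate approach: regress sex out, then estimate a treatment effect
X = np.column_stack([np.ones(n), sex])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
residual = outcome - X @ beta
covariate_effect = residual[treatment == 1].mean() - residual[treatment == 0].mean()

# Stratified approach: estimate the treatment effect within each sex separately
eff_sex0 = (outcome[(treatment == 1) & (sex == 0)].mean()
            - outcome[(treatment == 0) & (sex == 0)].mean())
eff_sex1 = (outcome[(treatment == 1) & (sex == 1)].mean()
            - outcome[(treatment == 0) & (sex == 1)].mean())

# The covariate model reports one averaged effect (about 1), concealing that
# the true effect is roughly 0 in one sex and roughly 2 in the other.
print(covariate_effect)
print(eff_sex0, eff_sex1)
```

The averaged estimate is not wrong as an overall summary, but it cannot answer the question the mandates are aimed at: whether the sexes respond differently.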
Dr. Galea noted that this study points to a need for funding agencies to demand that researchers meet their mandates on sex- and gender-based analysis. “Despite the mandates, not much is really changing as far as the analysis or design of experiments, and we need to figure out how to change that,” she said. “We need to figure out how to get researchers more interested to use the power of studying sex differences.”
‘Not surprising, but disappointing’
Vladimir Hachinski, MD, professor of neurology and epidemiology at Western University in London, Ont., and former editor in chief of Stroke, told this news organization that women have almost twice the lifetime risk of developing dementia, are at higher risk of stroke below age 35 years, and have more severe strokes and higher rates of disability at any age.
Commenting on the current study, Dr. Hachinski said, “It’s not surprising, but it’s disappointing, because we’ve known the difference for a long time.” He added, “The paper is very important because we were not aware that it was that bad.”
Dr. Hachinski also stated, “This paper needs a lot of reading. It’s a great resource, and it should be highlighted as one of those things that needs to be addressed, because it matters.”
The study was funded by a Natural Sciences and Engineering Research Council of Canada grant and by the British Columbia Women’s Foundation. Dr. Galea and Dr. Hachinski had no relevant disclosures.
A version of this article first appeared on Medscape.com.
FROM NATURE COMMUNICATIONS
Study casts doubt on safety, efficacy of L-serine supplementation for AD
When given to patients with AD, L-serine supplements could be driving abnormally increased serine levels in the brain even higher, potentially accelerating neuronal death, according to study author Xu Chen, PhD, of the University of California, San Diego, and colleagues.
This conclusion conflicts with a 2020 study by Juliette Le Douce, PhD, and colleagues, who reported that oral L-serine supplementation may act as a “ready-to-use therapy” for AD, based on their findings that patients with AD had low levels of PHGDH, an enzyme necessary for synthesizing serine, and AD-like mice had low levels of serine.
Writing in Cell Metabolism, Dr. Chen and colleagues framed the present study, and their findings, in this context.
“In contrast to the work of Le Douce et al., here we report that PHGDH mRNA and protein levels are increased in the brains of two mouse models of AD and/or tauopathy, and are also progressively increased in human brains with no, early, and late AD pathology, as well as in people with no, asymptomatic, and symptomatic AD,” they wrote.
They suggested adjusting clinical recommendations for L-serine, the form of the amino acid commonly found in supplements. In the body, L-serine is converted to D-serine, which acts on the NMDA receptor (NMDAR).
‘Long-term use of D-serine contributes to neuronal death,’ research suggests
“We feel oral L-serine as a ready-to-use therapy to AD warrants precaution,” Dr. Chen and colleagues wrote. “This is because despite being a cognitive enhancer, some [research] suggests that long-term use of D-serine contributes to neuronal death in AD through excitotoxicity. Furthermore, D-serine, as a co-agonist of NMDAR, would be expected to oppose NMDAR antagonists, which have proven clinical benefits in treating AD.”
According to principal author Sheng Zhong, PhD, of the University of California, San Diego, “Research is needed to test if targeting PHGDH can ameliorate cognitive decline in AD.”
Dr. Zhong also noted that the present findings support the “promise of using a specific RNA in blood as a biomarker for early detection of Alzheimer’s disease.” This approach is currently being validated at UCSD Shiley-Marcos Alzheimer’s Disease Research Center, he added.
Roles of PHGDH and serine in Alzheimer’s disease require further study
Commenting on both studies, Steve W. Barger, PhD, of the University of Arkansas for Medical Sciences, Little Rock, suggested that more work is needed to better understand the roles of PHGDH and serine in AD before clinical applications can be considered.
“In the end, these two studies fail to provide the clarity we need in designing evidence-based therapeutic hypotheses,” Dr. Barger said in an interview. “We still do not have a firm grasp on the role that D-serine plays in AD. Indeed, the evidence regarding even a single enzyme contributing to its levels is ambiguous.”
Dr. Barger, who has published extensively on the topic of neuronal death, with a particular focus on Alzheimer’s disease, noted that “determination of what happens to D-serine levels in AD has been of interest for decades,” but levels of the amino acid have been notoriously challenging to measure because “D-serine can disappear rapidly from the brain and its fluids after death.”
While Dr. Le Douce and colleagues did measure levels of serine in mice, Dr. Barger noted that the study by Dr. Chen and colleagues was conducted with more “quantitatively rigorous methods.” Even though Dr. Chen and colleagues “did not assay the levels of D-serine itself ... the implication of their findings is that PHGDH is poised to elevate this critical neurotransmitter,” leading to their conclusion that serine supplementation is “potentially dangerous.”
At this point, it may be too early to tell, according to Dr. Barger.
He suggested that conclusions drawn from PHGDH levels alone are “always limited,” and conclusions based on serine levels may be equally dubious, considering that the activities and effects of serine “are quite complex,” and may be influenced by other physiologic processes, including the effects of gut bacteria.
Instead, Dr. Barger suggested that changes in PHGDH and serine may be interpreted as signals coming from a more relevant process upstream: glucose metabolism.
“What we can say confidently is that the glucose metabolism that PHGDH connects to D-serine is most definitely a factor in AD,” he said. “Countless studies have documented what now appears to be a universal decline in glucose delivery to the cerebral cortex, even before frank dementia sets in.”
Dr. Barger noted that declining glucose delivery coincides with some of the earliest events in the development of AD, perhaps “linking accumulation of amyloid β-peptide to subsequent neurofibrillary tangles and tissue atrophy.”
Dr. Barger’s own work recently demonstrated that AD is associated with “an irregularity in the insertion of a specific glucose transporter (GLUT1) into the cell surface” of astrocytes.
“It could be more effective to direct therapeutic interventions at these events lying upstream of PHGDH or serine,” he concluded.
The study was partly supported by a Kreuger v. Wyeth research award. The investigators and Dr. Barger reported no conflicts of interest.
FROM CELL METABOLISM
Higher industriousness reduces risk of predementia syndrome in older adults
Higher industriousness was associated with a 25% reduced risk of concurrent motoric cognitive risk syndrome (MCR), based on data from approximately 6,000 individuals.
Previous research supports an association between conscientiousness and a lower risk of MCR, a form of predementia that involves slow gait speed and cognitive complaints, wrote Yannick Stephan, PhD, of the University of Montpellier (France), and colleagues. However, the specific facets of conscientiousness that impact MCR have not been examined.
In a study published in the Journal of Psychiatric Research, the authors reviewed data from 6,001 dementia-free adults aged 65-99 years who were enrolled in the Health and Retirement Study, a nationally representative longitudinal study of adults aged 50 years and older in the United States.
Baseline data were collected between 2008 and 2010, and participants were assessed for MCR at follow-up points during 2012-2014 and 2016-2018. Six facets of conscientiousness were assessed using a 24-item scale that has been used in previous studies. The six facets were industriousness, self-control, order, traditionalism, virtue, and responsibility. The researchers controlled for variables including demographic factors, cognition, physical activity, disease burden, depressive symptoms, and body mass index.
Overall, increased industriousness was significantly associated with a lower likelihood of concurrent MCR (odds ratio, 0.75) and a reduced risk of incident MCR (hazard ratio, 0.63; P < .001 for both).
The conscientiousness facets of order, self-control, and responsibility also were associated with a lower likelihood of both concurrent and incident MCR, with ORs ranging from 0.82 to 0.88 for concurrent MCR and HRs ranging from 0.72 to 0.82 for incident MCR.
Traditionalism and virtue were significantly associated with a lower risk of incident MCR, but not concurrent MCR (HR, 0.84; P < .01 for both).
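The headline figure of a “25% reduced risk” comes directly from the odds ratio of 0.75. As a quick illustration of how such a ratio is computed from a 2x2 table (the counts below are invented for the example, not taken from the study):

```python
# Hypothetical case counts for high- vs. low-industriousness groups
high = {"mcr": 60, "no_mcr": 940}   # invented numbers for illustration
low = {"mcr": 78, "no_mcr": 922}

odds_high = high["mcr"] / high["no_mcr"]   # odds of MCR in the high group
odds_low = low["mcr"] / low["no_mcr"]      # odds of MCR in the low group
odds_ratio = odds_high / odds_low

print(round(odds_ratio, 2))  # → 0.75, i.e., odds reduced by about 25%
```

Strictly speaking, an OR of 0.75 means 25% lower *odds*; it approximates a 25% lower risk only when the outcome is relatively uncommon, as MCR is in this cohort.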
The mechanism of action for the association may be explained by several cognitive, health-related, behavioral, and psychological pathways, the researchers wrote. With regard to industriousness, the relationship could be partly explained by cognition, physical activity, disease burden, BMI, and depressive symptoms. However, industriousness also has been associated with a reduced risk of systemic inflammation, which may in turn reduce MCR risk. Also, data suggest that industriousness and MCR share a common genetic cause.
The study findings were limited by several factors including the observational design and the positive selection effect from patients with complete follow-up data, as these patients likely have higher levels of order, industriousness, and responsibility, the researchers noted. However, the results support those from previous studies and were strengthened by the large sample and examination of six facets of conscientiousness.
“This study thus provides a more detailed understanding of the specific components of conscientiousness that are associated with risk of MCR among older adults,” and the facets could be targeted in interventions to reduce both MCR and dementia, they concluded.
The Health and Retirement Study is supported by the National Institute on Aging and conducted by the University of Michigan. The current study was supported in part by the National Institutes of Health. The researchers had no financial conflicts to disclose.
FROM THE JOURNAL OF PSYCHIATRIC RESEARCH