Mohs surgery in the elderly: The dilemma of when to treat

As increasing numbers of patients in their 80s, 90s, and even 100s present for possible Mohs micrographic surgery, surgeons are confronted with deciding when the risks of treatment may outweigh the benefits.

In one of two presentations at the annual meeting of the American College of Mohs Surgery that addressed this topic, Howard W. Rogers, MD, of Advanced Dermatology in Norwich, Conn., said that the crux of the issue is the concern not to undertreat. He noted that reduced access to dermatologic care during the pandemic has provided a stark lesson in the risks of delaying treatment in all age groups. “Mohs surgeons have all seen the consequences of delayed treatment due to the pandemic with enormous, destructive, and sometimes fatal cancers coming to the office in the last year,” he told this news organization.

“Pandemic-related treatment delay has caused increased suffering and morbidity for countless skin cancer patients across the U.S.,” he said. “In general, not treating skin cancer and hoping it’s not going to grow or having significant delays in treatment are a recipe for disastrous outcomes.”

That said, active monitoring may be appropriate “for select small cancers that tend to grow slowly in the very elderly,” added Dr. Rogers, the incoming ACMS president. Key situations in which the benefits of active monitoring may outweigh the risks of surgery include small, slowly growing cancers in patients for whom frailty is a concern.

Frailty has been equated with compromised functionality, which can increase the risk of an array of complications, including prolonged wound healing and secondary complications stemming from immobility. The toll those issues can take on patients’ quality of life can be considerable, Dr. Rogers said.

When weighing treatment options with elderly patients, he emphasized that careful consideration should be given to whether the “time needed to benefit from a Mohs procedure is longer than the patient’s life expectancy.” Furthermore, a decision not to treat does not have to be the last word. “We need to have an honest dialogue on the consequences of nontreatment, but part of that should be that just because we don’t treat today, doesn’t mean we can’t treat it tomorrow, if necessary.”

Of note, he added, “more than 100,000 patients have surgery for basal cell carcinoma [BCC] in their last year of life.” And that figure will likely rise exponentially if population projections come to fruition, considering that the population of people over the age of 85 is predicted to increase to nearly 18 million in 2050, from 5.8 million in 2012, Dr. Rogers said.

Until more research emerges on how best to treat this age group, Dr. Rogers noted that experts recommend that for elderly patients, “treatment should be individualized with consideration of active monitoring of primary BCC that is not in the H-zone, asymptomatic, smaller than 1 cm, with treatment initiated if there is substantial growth or symptoms.” Ultimately, he urged surgeons to “be sensitive and treat our patients like ourselves or our family members.”

When appropriate – Mohs is safe in the very elderly

Taking on the issue in a separate presentation, Deborah MacFarlane, MD, professor of dermatology and head and neck surgery at MD Anderson Cancer Center, Houston, said that for skin cancer cases that warrant treatment, clinicians should not let age alone stand in the way of Mohs surgery.

The evidence of its safety in the elderly dates back to a paper published in 1997 that Dr. MacFarlane coauthored, describing Mohs surgery of BCCs, squamous cell cancers (SCCs), and melanomas among 115 patients aged 90 and older (average, 92.4 years) who had an average of 1.9 comorbid medical conditions, and were taking an average of 2.3 medications. “Overall, we had just one complication among the patients,” she said.

In a subsequent paper, Dr. MacFarlane and her colleagues found that age at the time of Mohs surgery, even in older patients, was unrelated to survival, stage of cancer, or the type of repair. “We have concluded that this rapidly growing segment of the population can undergo Mohs surgery and should not be relegated to less effective treatment out of fear of its affecting their survival,” Dr. MacFarlane said.

She agreed with the concern about frailty and hence functionality, which may need to be factored in when making a decision to perform Mohs surgery. “I think this is something we do intuitively anyway,” she added. “We’re going to offer Mohs to someone who we think will survive and who is in relatively good health,” Dr. MacFarlane noted.

The point is illustrated in a new multicenter study of 1,181 patients aged 85 years and older with nonmelanoma skin cancer who were referred for Mohs surgery at 22 U.S. sites. In the study, published in JAMA Dermatology after the ACMS meeting, patients who had Mohs surgery were almost four times more likely to have high functional status (P < .001) and were more likely to have facial tumors (P < .001), compared with those who had an alternative surgery.

The main reasons provided by the surgeons for opting to treat with Mohs included a patient’s desire for treatment with a high cure rate (66%), good/excellent patient functional status for age (57%), and a high risk associated with the tumor based on histology (40%), noted Dr. MacFarlane, one of the authors.



She reiterated the point raised by Dr. Rogers that “this is something we’re going to increasingly face,” noting that people over 85 represent the fastest growing segment of the population. “I have more patients over the age of 100 than I’ve ever had before,” she said.

Nevertheless, her own experience with elderly patients speaks to the safety of Mohs surgery in this population: Dr. MacFarlane reported a review of her practice’s records of 171 patients aged 85 years and older between May 2016 and May 2022, who received 414 separate procedures, without a single complication.

Sharing many of Dr. Rogers’ concerns about using caution in at-risk patients, Dr. MacFarlane offered recommendations for the optimal treatment of elderly patients undergoing Mohs surgery, including handling tissue delicately and keeping undermining “to a minimum.” She noted that intermediate closures and full-thickness skin grafts are ideal for the elderly, while flaps may be performed in selected patients with robust skin. It is also important to involve caretakers from the outset, to talk and listen to patients – and to play their choice of music during treatment, she said.

Commenting on the debate, comoderator Nahid Y. Vidal, MD, of the department of dermatology, Mayo Clinic, Rochester, Minn., noted that the expanding older population is accompanied by increases in skin cancer, as well as by greater immunosenescence, which is related to the development of infections, autoimmune disease, and malignant tumors.

“In our academic practice, as with both the reference speakers, we do frequently see elderly, and not uncommonly the super-elderly,” she told this news organization. “The take-home point for me is to treat your whole patient, not just the tumor,” considering social factors, frailty/spry factor, and preferences, “and to do the humanistic thing, while also remaining evidence based,” she said.

“Don’t assume that increased age translates to morbidity, worse outcomes, or futility of treatment,” she added. “Chances are, if [a patient] made it to 90 years old with only a few medications and few medical problems, they may make it to 100, so why put the patient at risk for metastasis and death from a treatable/curable skin cancer,” in the case of SCC, she said.

“By the same token, why not perform more conservative treatments such as ED&C [electrodesiccation and curettage] for very low-risk skin cancers in low-risk locations, such as a superficial basal cell carcinoma on the trunk?” Overall, instead of trying to determine how long a super-elderly individual will live, Dr. Vidal said that “it’s better to educate the patient, engage in a discussion about goals of care, and to make few assumptions.”

Dr. Rogers, Dr. MacFarlane, and Dr. Vidal report no disclosures.

A version of this article first appeared on Medscape.com.

FROM ACMS 2022


Hearing, vision loss combo a colossal risk for cognitive decline

The combination of hearing loss and vision loss is linked to an eightfold increased risk of cognitive impairment, new research shows.

Investigators analyzed data on more than 5 million U.S. seniors. Adjusted results show that participants with hearing impairment alone had more than twice the odds of also having cognitive impairment, while those with vision impairment alone had more than triple the odds of cognitive impairment.

However, those with dual sensory impairment (DSI) had an eightfold higher risk for cognitive impairment.

In addition, half of the participants with DSI also had cognitive impairment. Of those with cognitive impairment, 16% had DSI, compared with only about 2% of their peers without cognitive impairment.

“The findings of the present study may inform interventions that can support older people with concurrent sensory impairment and cognitive impairment,” said lead author Esme Fuller-Thomson, PhD, professor, Factor-Inwentash Faculty of Social Work, University of Toronto.

“Special attention, in particular, should be given to those aged 65-74 who have serious hearing and/or vision impairment [because], if the relationship with dementia is found to be causal, such interventions can potentially mitigate the development of cognitive impairment,” said Dr. Fuller-Thomson, who is also director of the Institute for Life Course and Aging and a professor in the department of family and community medicine and faculty of nursing, all at the University of Toronto.

The findings were published online in the Journal of Alzheimer’s Disease Reports.

Sensory isolation

Hearing and vision impairment increase with age; it is estimated that one-third of U.S. adults between the ages of 65 and 74 experience hearing loss, and 4% experience vision impairment, the investigators note.

“The link between dual hearing loss and seeing loss and mental health problems such as depression and social isolation have been well researched, but we were very interested in the link between dual sensory loss and cognitive problems,” Dr. Fuller-Thomson said.

Additionally, “there have been several studies in the past decade linking hearing loss to dementia and cognitive decline, but less attention has been paid to cognitive problems among those with DSI, despite this group being particularly isolated,” she said. Existing research into DSI suggests an association with cognitive decline; the current investigators sought to expand on this previous work.

To do so, they used merged data from 10 consecutive waves from 2008 to 2017 of the American Community Survey (ACS), which was conducted by the U.S. Census Bureau. The ACS is a nationally representative sample of 3.5 million randomly selected U.S. addresses and includes community-dwelling adults and those residing in institutional settings.

Participants aged 65 or older (n = 5,405,135; 56.4% women) were asked yes/no questions regarding serious cognitive impairment, hearing impairment, and vision impairment. A proxy, such as a family member or nursing home staff member, provided answers for individuals not capable of self-report.

Potential confounding variables included age, race/ethnicity, sex, education, and household income.
 

Potential mechanisms

Results showed that, among those with cognitive impairment, there was a higher prevalence of hearing impairment, vision impairment, and DSI than among their peers without cognitive impairment; in addition, a lower percentage of these persons had no sensory impairment (P < .001).

The prevalence of DSI climbed with age, from 1.5% for respondents aged 65-74 years to 2.6% for those aged 75-84 and to 10.8% in those 85 years and older.

Individuals with higher levels of poverty also had higher levels of DSI. Among those who had not completed high school, the prevalence of DSI was higher than among high school and university graduates (6.3% vs. 3.1% and 1.8%, respectively).

After controlling for age, race, education, and income, the researchers found “substantially” higher odds of cognitive impairment in those with vs. those without sensory impairments.

“The magnitude of the odds of cognitive impairment by sensory impairment was greatest for the youngest cohort (age 65-74) and lowest for the oldest cohort (age 85+),” the investigators wrote. Among participants in the youngest cohort, there was a “dose-response relationship” for those with hearing impairment only, visual impairment only, and DSI.

Because the study was observational, it “does not provide sufficient information to determine the reasons behind the observed link between sensory loss and cognitive problems,” Dr. Fuller-Thomson said. However, there are “several potential causal mechanisms [that] warrant future research.”

The “sensory deprivation hypothesis” suggests that DSI could cause cognitive deterioration because of decreased auditory and visual input. The “resource allocation hypothesis” posits that hearing- or vision-impaired older adults “may use more cognitive resources to accommodate for sensory deficits, allocating fewer cognitive resources for higher-order memory processes,” the researchers wrote. Hearing impairment “may also lead to social disengagement among older adults, hastening cognitive decline due to isolation and lack of stimulation,” they added.

Reverse causality is also possible. In the “cognitive load on perception” hypothesis, cognitive decline may lead to declines in hearing and vision because of “decreased resources for sensory processing.”

In addition, the association may be noncausal. “The ‘common cause hypothesis’ theorizes that sensory impairment and cognitive impairment may be due to shared age-related degeneration of the central nervous system ... or frailty,” Dr. Fuller-Thomson said.
 

Parallel findings

The results are similar to those from a study conducted by Phillip Hwang, PhD, of the department of anatomy and neurobiology, Boston University, and colleagues that was published online in JAMA Network Open.

They analyzed data on 8 years of follow-up of 2,927 participants in the Cardiovascular Health Study (mean age, 74.6 years; 58.2% women).

Compared with no sensory impairment, DSI was associated with increased risk for all-cause dementia and Alzheimer’s disease, but not with vascular dementia.

“Future work in health care guidelines could consider incorporating screening of sensory impairment in older adults as part of risk assessment for dementia,” Nicholas Reed, AuD, and Esther Oh, MD, PhD, both of Johns Hopkins University, Baltimore, wrote in an accompanying editorial.
 

Accurate testing

Commenting on both studies, Heather Whitson, MD, professor of medicine (geriatrics) and ophthalmology and director at the Duke University Center for the Study of Aging and Human Development, Durham, N.C., said both “add further strength to the evidence base, which has really converged in the last few years to support that there is a link between sensory health and cognitive health.”

However, “we still don’t know whether hearing/vision loss causes cognitive decline, though there are plausible ways that sensory loss could affect cognitive abilities like memory, language, and executive function,” she said.

Dr. Whitson, who was not involved with the research, is also codirector of the Duke/University of North Carolina Alzheimer’s Disease Research Center at Duke University, Durham, N.C., and the Durham VA Medical Center.

“The big question is whether we can improve patients’ cognitive performance by treating or accommodating their sensory impairments,” she said. “If safe and feasible things like hearing aids or cataract surgery improve cognitive health, even a little bit, it would be a huge benefit to society, because sensory loss is very common, and there are many treatment options,” Dr. Whitson added.

Dr. Fuller-Thomson emphasized that practitioners should “consider the full impact of sensory impairment on cognitive testing methods, as both auditory and visual testing methods may fail to take hearing and vision impairment into account.”

Thus, “when performing cognitive tests on older adults with sensory impairments, practitioners should ensure they are communicating audibly and/or using visual speech cues for hearing-impaired individuals, eliminating items from cognitive tests that rely on vision for those who are visually impaired, and using physical cues for individuals with hearing or dual sensory impairment, as this can help increase the accuracy of testing and prevent confounding,” she said.

The study by Fuller-Thomson et al. was funded by a donation from Janis Rotman. Its investigators have reported no relevant financial relationships. The study by Hwang et al. was funded by contracts from the National Heart, Lung, and Blood Institute, the National Institute of Neurological Disorders and Stroke, and the National Institute on Aging. Dr. Hwang reports no relevant financial relationships. The other investigators’ disclosures are listed in the original article. Dr. Reed received grants from the National Institute on Aging during the conduct of the study and has served on the advisory board of Neosensory outside the submitted work. Dr. Oh and Dr. Whitson report no relevant financial relationships.

A version of this article first appeared on Medscape.com.

FROM THE JOURNAL OF ALZHEIMER’S DISEASE REPORTS


Where Does the Hospital Belong? Perspectives on Hospital at Home in the 21st Century

From Medically Home Group, Boston, MA.

Brick-and-mortar hospitals in the United States have historically been considered the dominant setting for providing care to patients. The coordination and delivery of care have previously been bound to physical hospitals largely because multidisciplinary services were accessible only in a single location. While the fundamental makeup of these services remains unchanged, they are now available in alternate settings, including access to a patient care team, supplies, diagnostics, pharmacy, and advanced therapeutic interventions. The physical environment is becoming increasingly irrelevant as the core of what makes the traditional hospital (the professional staff, collaborative work processes, and the dynamics of the space) has been translated into a modern, digitally integrated environment. The elements necessary for providing safe, effective care in a physical hospital setting are now available in a patient’s home.

Impetus for the Model

As hospitals reconsider how and where they deliver patient care because of limited resources, the hospital-at-home model has gained significant momentum and interest. This model transforms a home into a hospital. The inpatient acute care episode is entirely substituted with an intensive at-home hospital admission enabled by technology, multidisciplinary teams, and ancillary services. Furthermore, patients requiring post-acute support can be transitioned to their next phase of care seamlessly. Given the nationwide nursing shortage, aging population, challenges uncovered by the COVID-19 pandemic, rising hospital costs, nurse/provider burnout related to challenging work environments, and capacity constraints, a shift toward the combination of virtual and in-home care is imperative. The hospital-at-home model has been associated with superior patient outcomes, including reduced risks of delirium, improved functional status, improved patient and family member satisfaction, reduced mortality, reduced readmissions, and significantly lower costs.1 COVID-19 alone has unmasked major facility-based deficiencies and limitations of our health care system. While the pandemic is not the impetus for the hospital-at-home model, the extended stress of this event has created a unique opportunity to reimagine and transform our health care delivery system so that it is less fragmented and more flexible.

Nursing in the Model

Nursing is central to the hospital-at-home model. Virtual nurses provide meticulous care plan oversight, assessment, and documentation across in-home service providers to ensure holistic, safe, transparent, and continuous progression toward care plan milestones. The virtual nurse monitors patients using in-home technology that is set up at the time of admission. Connecting with patients to verify social and medical needs, the virtual nurse advocates for their patients and uses these technologies to provide care and to deploy on-demand, hands-on services to the patient. Service providers such as paramedics, infusion nurses, or home health nurses may be deployed to provide services in the patient’s home. By bringing in supplies, therapeutics, and interdisciplinary team members, the capabilities of a brick-and-mortar hospital are replicated in the home. All actions that occur wherever the patient is receiving care are overseen by professional nursing staff; in short, virtual nurses are the equivalent of bedside nurses in brick-and-mortar health care facilities.

Potential Benefits

There are many benefits to the hospital-at-home model (Table). This health care model can be particularly helpful for patients who require frequent admission to acute care facilities, and it is well suited to patients with a range of conditions, including COVID-19, pneumonia, cellulitis, and congestive heart failure. The model removes some of the stressors faced by patients whose chronic illnesses or other conditions require frequent hospital admissions. Patients can recover independently at home, surrounded by their loved ones and pets. This approach also eliminates the risk of hospital-acquired infections and injuries. The hospital-at-home model allows for increased mobility,2 as patients are familiar with their surroundings, which reduces the onset of delirium. Additionally, patients with improved mobility performance are less likely to experience negative health outcomes.3 There is less chance of sleep disruption because the patient sleeps in their own bed: no unfamiliar roommate, no call bells, no health care personnel frequently entering the room. The in-home technology used for remote patient monitoring is designed with the user in mind; its ease of use empowers patients to collaborate with their care team on their own terms and to center their own and their families’ priorities.

Table. Benefits of the Hospital-at-Home Model

Positive Outcomes

The hospital-at-home model is associated with positive outcomes. The authors of a systematic review identified 10 randomized controlled trials of hospital-at-home programs (1372 patients in total) but were able to obtain data for only 5 of these trials (844 patients).4 They found a 38% reduction in 6-month mortality for patients who received hospital care at home, as well as significantly higher patient satisfaction across a range of medical conditions, including cellulitis and community-acquired pneumonia, and among elderly patients with multiple medical conditions. The authors concluded that hospital care at home was less expensive than admission to an acute care hospital.4 Similarly, a meta-analysis by Caplan et al5 that included 61 randomized controlled trials concluded that hospital at home is associated with reductions in mortality, readmission rates, and cost, and with increases in patient and caregiver satisfaction. Levine et al2 found reduced costs and utilization with home hospitalization compared to in-hospital care, as well as improved patient mobility.
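As an editorial aid for interpreting the headline figure (this derivation is ours, not restated from the review), a 38% relative reduction in 6-month mortality corresponds approximately to a hazard ratio of 0.62, since the relative risk reduction is one minus the hazard ratio:

\[
\mathrm{HR} \approx 1 - \mathrm{RRR} = 1 - 0.38 = 0.62
\]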

The home is the ideal place to empower patients and caregivers to engage in self-management.2 Receiving hospital care at home spares patients transportation arrangements, traffic, road tolls, and scheduling constraints, as well as the need to find care for a dependent family member, some of the many stressors experienced by patients who require frequent trips to the hospital. For patients who are not clinically suitable candidates for hospital at home, such as those requiring critical care intervention and support, the brick-and-mortar hospital remains the appropriate site of care. The hospital-at-home model helps prevent bed shortages in brick-and-mortar settings by allowing hospital care at home for patients who meet preset criteria; these patients can be hospitalized in alternative locations such as their own home or the residence of a friend. This increases both health system capacity and resiliency.

In addition to expanding safe and appropriate treatment spaces, the hospital-at-home model increases access to care during nonstandard hours, including weekends and holidays, and when emergency department waits are long. Furthermore, providing care in the home gives the clinical team valuable insight into the patient’s daily life and routine. Performing medication reconciliation with the medicine cabinet in sight and delivering dietary education in the patient’s kitchen are powerful touch points.2 For example, a patient with congestive heart failure who must undergo diuresis is far more likely to meet care goals when the home diet is aligned with the treatment goal; by seeing exactly what is in the patient’s pantry and refrigerator, the care team can tailor its approach to sodium intake and fluid management. Providers can create and execute truly patient-centric care as they gain direct insight into the patient’s lifestyle, which is especially valuable when creating care plans for complex chronic health issues.
Challenges to Implementation and Scaling

Although there are clear benefits to hospital at home, implementing and scaling this model presents a challenge. In addition to educating patients and families about this model of care, health care systems must expand their hospital-at-home programs and educate clinical staff and trainees about the model, and insurers must create reimbursement paradigms. Enrolling patients who meet eligibility criteria is the easiest hurdle to clear, as hospital-at-home programs function best when they enroll and serve as many patients as possible, including underserved populations.

Upfront Costs and Cost Savings

While there are upfront costs to set up technology and coordinate services, hospital at home provides significant total cost savings compared to a brick-and-mortar admission. Hospital care accounts for about one-third of total medical expenditures and is a leading cause of debt.2 Eliminating fixed hospital costs such as facility, overhead, and equipment costs through adoption of the hospital-at-home model can reduce expenditures. Fewer laboratory and diagnostic tests are ordered for hospital-at-home patients than for similar patients in brick-and-mortar settings, with comparable or better clinical outcomes.6 Estimated cost savings are 19% to 30% compared to traditional inpatient care.6 Without legislative action, the Centers for Medicare & Medicaid Services’ Acute Hospital Care at Home waiver will terminate at the end of the current COVID-19 public health emergency, which could slow scaling of the model. However, over the past 2 years there has been enough buy-in from major health systems and patients to sustain the model’s growth. When setting up a hospital-at-home program, it is wise to consider a few factors: where in the hospital or health system entity structure the program will reside, which existing resources can be leveraged within the hospital or health system, and which state and federal regulatory requirements apply. This type of program continues to fill gaps within the US health care system, meeting the needs of widely overlooked populations and increasing access to essential ancillary services.
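To make the 19% to 30% range concrete, a rough illustration (the baseline episode cost below is a hypothetical figure chosen only for arithmetic; it does not come from the article or its references): for a traditional inpatient episode costing $12,000,

\[
0.19 \times \$12{,}000 = \$2280 \qquad\text{and}\qquad 0.30 \times \$12{,}000 = \$3600
\]

would be saved per admission at the low and high ends of the estimate, respectively.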

Conclusion

It is time to confront our bias toward hospital-first options when managing the care needs of our patients. Health care providers have the option to advocate for holistic care, better experience, and better outcomes. Home-based options are safe, equitable, and patient-centric. Rising costs, consumerism, and technology have pushed us to think about alternative approaches to care delivery, and the pandemic created a unique opportunity to see just how far the health care system could stretch itself amid capacity constraints, insufficient resources, and staff shortages. In light of new possibilities, it is time to reimagine and transform our health care delivery system so that it is unified, seamless, cohesive, and flexible.

Corresponding author: Payal Sharma, DNP, MSN, RN, FNP-BC, CBN; [email protected].

Disclosures: None reported.

References

1. Cai S, Laurel PA, Makineni R, Marks ML. Evaluation of a hospital-in-home program implemented among veterans. Am J Manag Care. 2017;23(8):482-487. 

2. Levine DM, Ouchi K, Blanchfield B, et al. Hospital-level care at home for acutely ill adults: a pilot randomized controlled trial. J Gen Intern Med. 2018;33(5):729-736. doi:10.1007/s11606-018-4307-z

3. Shuman V, Coyle PC, Perera S, et al. Association between improved mobility and distal health outcomes. J Gerontol A Biol Sci Med Sci. 2020;75(12):2412-2417. doi:10.1093/gerona/glaa086

4. Shepperd S, Doll H, Angus RM, et al. Avoiding hospital admission through provision of hospital care at home: a systematic review and meta-analysis of individual patient data. CMAJ. 2009;180(2):175-182. doi:10.1503/cmaj.081491

5. Caplan GA, Sulaiman NS, Mangin DA, et al. A meta-analysis of “hospital in the home”. Med J Aust. 2012;197(9):512-519. doi:10.5694/mja12.10480

6. Hospital at Home. Johns Hopkins Medicine. Healthcare Solutions. Accessed May 20, 2022. https://www.johnshopkinssolutions.com/solution/hospital-at-home/



Coronary CT Angiography Compared to Coronary Angiography or Standard of Care in Patients With Intermediate-Risk Stable Chest Pain


Study 1 Overview (SCOT-HEART Investigators)

Objective: To assess cardiovascular mortality and nonfatal myocardial infarction at 5 years in patients with stable chest pain referred to cardiology clinic for management with either standard care plus computed tomography angiography (CTA) or standard care alone.

Design: Multicenter, randomized, open-label prospective study.

Setting and participants: A total of 4146 patients with stable chest pain were randomized to standard care or standard care plus CTA at 12 centers across Scotland and were followed for 5 years.

Main outcome measures: The primary end point was a composite of death from coronary heart disease or nonfatal myocardial infarction. Main secondary end points were nonfatal myocardial infarction, nonfatal stroke, and frequency of invasive coronary angiography (ICA) and coronary revascularization with percutaneous coronary intervention or coronary artery bypass grafting.

Main results: The primary outcome, the composite of death from coronary heart disease or nonfatal myocardial infarction, occurred less often in the CTA group than in the standard-care group: 2.3% (48 of 2073 patients) vs 3.9% (81 of 2073 patients), respectively (hazard ratio, 0.59; 95% CI, 0.41-0.84; P = .004). Although there was a higher rate of ICA and coronary revascularization in the CTA group than in the standard-care group in the first few months of follow-up, the overall rates were similar at 5 years, with ICA performed in 491 patients and 502 patients in the CTA vs standard-care groups, respectively (hazard ratio, 1.00; 95% CI, 0.88-1.13). Similarly, coronary revascularization was performed in 279 patients in the CTA group and in 267 patients in the standard-care group (hazard ratio, 1.07; 95% CI, 0.91-1.27). There were, however, more preventive therapies initiated in patients in the CTA group than in the standard-care group (odds ratio, 1.40; 95% CI, 1.19-1.65).
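As an editorial aside derived from the reported event counts (these derived figures are not stated in the trial report), the absolute risk reduction and the corresponding number needed to treat over 5 years are:

\[
\mathrm{ARR} = \frac{81}{2073} - \frac{48}{2073} \approx 3.9\% - 2.3\% = 1.6 \text{ percentage points}, \qquad \mathrm{NNT} = \frac{1}{0.016} \approx 63.
\]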

Conclusion: In patients with stable chest pain, the use of CTA in addition to standard care resulted in a significantly lower rate of death from coronary heart disease or nonfatal myocardial infarction at 5 years; the main contributor to this outcome was a reduced nonfatal myocardial infarction rate. There was no difference in the rate of coronary angiography or coronary revascularization between the 2 groups at 5 years.

Study 2 Overview (DISCHARGE Trial Group)

Objective: To compare the effectiveness of computed tomography (CT) with ICA as a diagnostic tool in patients with stable chest pain and intermediate pretest probability of coronary artery disease (CAD).

Design: Multicenter, randomized, assessor-blinded pragmatic prospective study.

Setting and participants: A total of 3667 patients with stable chest pain and intermediate pretest probability of CAD were enrolled at 26 centers and randomized into CT or ICA groups. Only 3561 patients were included in the modified intention-to-treat analysis, with 1808 patients and 1753 patients in the CT and ICA groups, respectively.

Main outcome measures: The primary outcome was a composite of cardiovascular death, nonfatal myocardial infarction, and nonfatal stroke over 3.5 years. The main secondary outcomes were major procedure-related complications and patient-reported angina pectoris during the last 4 weeks of follow up.

Main results: The primary outcome occurred in 38 of 1808 patients (2.1%) in the CT group and in 52 of 1753 patients (3.0%) in the ICA group (hazard ratio, 0.70; 95% CI, 0.46-1.07; P = .10). The secondary outcomes showed that major procedure-related complications occurred in 9 patients (0.5%) in the CT group and in 33 patients (1.9%) in the ICA group (hazard ratio, 0.26; 95% CI, 0.13-0.55). Rates of patient-reported angina in the final 4 weeks of follow-up were 8.8% in the CT group and 7.5% in the ICA group (odds ratio, 1.17; 95% CI, 0.92-1.48).
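Applying the same back-of-the-envelope arithmetic to the safety end point (again derived by us from the reported counts, not stated in the trial), the absolute difference in major procedure-related complications and the corresponding number needed to treat are:

\[
\frac{33}{1753} - \frac{9}{1808} \approx 1.9\% - 0.5\% = 1.4 \text{ percentage points}, \qquad \mathrm{NNT} = \frac{1}{0.014} \approx 72.
\]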

Conclusion: Risk of major adverse cardiovascular events (the primary outcome) was similar in the CT and ICA groups among patients with stable chest pain and intermediate pretest probability of CAD. Patients referred for CT underwent coronary angiography less often, leading to fewer major procedure-related complications than in patients referred directly for ICA.

Commentary

Evaluation and treatment of obstructive atherosclerosis is an important part of clinical care in patients presenting with angina symptoms.1 Thus, the initial investigation for patients with suspected obstructive CAD includes ruling out acute coronary syndrome and assessing quality of life.1 The choice of diagnostic test should be tailored to the pretest probability of obstructive CAD.2

In the United States, stress testing has traditionally been used for the initial assessment of patients with suspected CAD,3 but CTA has recently been utilized more frequently for this purpose. Compared to a stress test, which helps identify and assess ischemia, CTA provides an anatomical assessment, with higher sensitivity for identifying CAD.4 Furthermore, it can detect nonobstructive plaques that can be challenging to identify with a stress test alone.

Whether CTA is superior to stress testing as the initial assessment for CAD has been debated. The randomized PROMISE trial compared patients with stable angina who underwent functional stress testing or CTA as an initial strategy.5 The investigators reported similar outcomes between the 2 groups at a median follow-up of 2 years. However, in the original SCOT-HEART trial (CT coronary angiography in patients with suspected angina due to coronary heart disease), published in the same year as the PROMISE trial, patients who underwent initial assessment with CTA had a numerically lower composite end point of cardiac death and myocardial infarction at a median follow-up of 1.7 years (1.3% vs 2.0%, P = .053).6

Given this result, the SCOT-HEART investigators extended the follow-up to evaluate the composite end point of death from coronary heart disease or nonfatal myocardial infarction at 5 years.7 This trial enrolled patients who were initially referred to a cardiology clinic for evaluation of chest pain, and they were randomized to standard care plus CTA or standard care alone. At a median follow-up of 4.8 years, the primary outcome was lower in the CTA group (2.3%, 48 patients) than in the standard-care group (3.9%, 81 patients) (hazard ratio, 0.59; 95% CI, 0.41-0.84; P = .004). Both groups had similar rates of invasive coronary angiography and similar coronary revascularization rates.

It is hypothesized that the lower rate of nonfatal myocardial infarction with CTA plus standard care reflects the higher rate of preventive therapies initiated in that group compared to standard care alone. However, the composition of the standard-care group should be noted when comparing with the PROMISE trial. In PROMISE, the comparator group predominantly underwent stress imaging (either nuclear stress testing or echocardiography), while in SCOT-HEART the comparator group predominantly underwent stress electrocardiography (ECG), and only 10% of patients underwent stress imaging. It is possible that the difference seen in the rate of nonfatal myocardial infarction was due to suboptimal diagnosis of CAD with stress ECG, which has lower sensitivity than stress imaging.
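To see why test sensitivity matters for ruling out disease, consider a numerical sketch with hypothetical test characteristics (the sensitivities and specificities below are illustrative assumptions, not values from these trials). For a patient with pretest probability p of CAD, the post-test probability after a negative result is:

\[
P(\mathrm{CAD}\mid \mathrm{negative}) = \frac{p\,(1-\mathrm{Se})}{p\,(1-\mathrm{Se}) + (1-p)\,\mathrm{Sp}}
\]

With p = 0.30, Se = 0.95, and Sp = 0.80 (CTA-like assumptions), the residual probability is about 2.6%; with Se = 0.60 and Sp = 0.70 (stress ECG-like assumptions), it is about 19.7%. Under these assumptions, a negative stress ECG leaves far more residual risk, consistent with the suboptimal rule-out suggested above.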

The DISCHARGE trial investigated the effectiveness of CTA vs ICA as the initial diagnostic test in the management of patients with stable chest pain and an intermediate pretest probability of obstructive CAD.8 At 3.5 years of follow-up, the primary composite of cardiovascular death, myocardial infarction, or stroke was similar in both groups (2.1% vs 3.0%; hazard ratio, 0.70; 95% CI, 0.46-1.07; P = .10). Importantly, as fewer patients underwent ICA, the risk of procedure-related complications was lower in the CTA group than in the ICA group. However, it is important to note that only 25% of the patients diagnosed with obstructive CAD had greater than 50% vessel stenosis, which raises the question of whether an initial invasive strategy is appropriate for this population.

The strengths of these 2 studies include the large number of patients enrolled and the adequate follow-up: 5 years in the SCOT-HEART trial and 3.5 years in the DISCHARGE trial. Overall, the 2 studies support the usefulness of CTA for assessment of CAD. However, the control groups differed considerably. In the SCOT-HEART study, the comparator group was primarily assessed by stress ECG, while in the DISCHARGE study, the comparator group was primarily assessed by ICA. In the PROMISE trial, the composite end point of death, myocardial infarction, hospitalization for unstable angina, or major procedural complication was similar when the strategy of initial CTA was compared to functional testing with imaging (exercise ECG, nuclear stress testing, or echocardiography).5 Thus, clinical assessment is still needed when clinicians are selecting the appropriate diagnostic test for patients with suspected CAD. The most recent guidelines give similar recommendations for CTA and stress imaging.9 Whether further improvements in CTA acquisition or the addition of CT fractional flow reserve can further improve outcomes requires additional study.

Applications for Clinical Practice and System Implementation

In patients with stable chest pain and intermediate pretest probability of CAD, CTA is useful for diagnosis compared with stress ECG and reduces utilization of low-yield ICA. Whether CTA is more useful than other noninvasive stress imaging modalities in this population requires further study.

Practice Points

  • In patients with stable chest pain and intermediate pretest probability of CAD, CTA is useful compared to stress ECG.
  • Use of CTA can potentially reduce the use of low-yield coronary angiography.

–Thai Nguyen, MD, Albert Chan, MD, Taishi Hirai, MD
University of Missouri, Columbia, MO

References

1. Knuuti J, Wijns W, Saraste A, et al. 2019 ESC Guidelines for the diagnosis and management of chronic coronary syndromes. Eur Heart J. 2020;41(3):407-477. doi:10.1093/eurheartj/ehz425

2. Nakano S, Kohsaka S, Chikamori T, et al. JCS 2022 guideline focused update on diagnosis and treatment in patients with stable coronary artery disease. Circ J. 2022;86(5):882-915. doi:10.1253/circj.CJ-21-1041

3. Fihn SD, Gardin JM, Abrams J, et al. 2012 ACCF/AHA/ACP/AATS/PCNA/SCAI/STS Guideline for the diagnosis and management of patients with stable ischemic heart disease: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines, and the American College of Physicians, American Association for Thoracic Surgery, Preventive Cardiovascular Nurses Association, Society for Cardiovascular Angiography and Interventions, and Society of Thoracic Surgeons. J Am Coll Cardiol. 2012;60(24):e44-e164. doi:10.1016/j.jacc.2012.07.013

4. Arbab-Zadeh A, Di Carli MF, Cerci R, et al. Accuracy of computed tomographic angiography and single-photon emission computed tomography-acquired myocardial perfusion imaging for the diagnosis of coronary artery disease. Circ Cardiovasc Imaging. 2015;8(10):e003533. doi:10.1161/CIRCIMAGING

5. Douglas PS, Hoffmann U, Patel MR, et al. Outcomes of anatomical versus functional testing for coronary artery disease. N Engl J Med. 2015;372(14):1291-300. doi:10.1056/NEJMoa1415516

6. SCOT-HEART investigators. CT coronary angiography in patients with suspected angina due to coronary heart disease (SCOT-HEART): an open-label, parallel-group, multicentre trial. Lancet. 2015;385:2383-2391. doi:10.1016/S0140-6736(15)60291-4

7. SCOT-HEART Investigators, Newby DE, Adamson PD, et al. Coronary CT angiography and 5-year risk of myocardial infarction. N Engl J Med. 2018;379(10):924-933. doi:10.1056/NEJMoa1805971

8. DISCHARGE Trial Group, Maurovich-Horvat P, Bosserdt M, et al. CT or invasive coronary angiography in stable chest pain. N Engl J Med. 2022;386(17):1591-1602. doi:10.1056/NEJMoa2200963

9. Writing Committee Members, Lawton JS, Tamis-Holland JE, et al. 2021 ACC/AHA/SCAI guideline for coronary artery revascularization: a report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. J Am Coll Cardiol. 2022;79(2):e21-e129. doi:10.1016/j.jacc.2021.09.006

Article PDF
Issue
Journal of Clinical Outcomes Management - 29(3)
Publications
Topics
Page Number
105-108
Sections
Article PDF
Article PDF

Study 1 Overview (SCOT-HEART Investigators)

Objective: To assess cardiovascular mortality and nonfatal myocardial infarction at 5 years in patients with stable chest pain referred to cardiology clinic for management with either standard care plus computed tomography angiography (CTA) or standard care alone.

Design: Multicenter, randomized, open-label prospective study.

Setting and participants: A total of 4146 patients with stable chest pain were randomized to standard care or standard care plus CTA at 12 centers across Scotland and were followed for 5 years.

Main outcome measures: The primary end point was a composite of death from coronary heart disease or nonfatal myocardial infarction. Main secondary end points were nonfatal myocardial infarction, nonfatal stroke, and frequency of invasive coronary angiography (ICA) and coronary revascularization with percutaneous coronary intervention or coronary artery bypass grafting.

Main results: The primary outcome including the composite of cardiovascular death or nonfatal myocardial infarction was lower in the CTA group than in the standard-care group at 2.3% (48 of 2073 patients) vs 3.9% (81 of 2073 patients), respectively (hazard ratio, 0.59; 95% CI, 0.41-0.84; P = .004). Although there was a higher rate of ICA and coronary revascularization in the CTA group than in the standard-care group in the first few months of follow-up, the overall rates were similar at 5 years, with ICA performed in 491 patients and 502 patients in the CTA vs standard-care groups, respectively (hazard ratio, 1.00; 95% CI, 0.88-1.13). Similarly, coronary revascularization was performed in 279 patients in the CTA group and in 267 patients in the standard-care group (hazard ratio, 1.07; 95% CI, 0.91-1.27). There were, however, more preventive therapies initiated in patients in the CTA group than in the standard-care group (odds ratio, 1.40; 95% CI, 1.19-1.65).

Conclusion: In patients with stable chest pain, the use of CTA in addition to standard care resulted in a significantly lower rate of death from coronary heart disease or nonfatal myocardial infarction at 5 years; the main contributor to this outcome was a reduced nonfatal myocardial infarction rate. There was no difference in the rate of coronary angiography or coronary revascularization between the 2 groups at 5 years.

 

 

Study 2 Overview (DISCHARGE Trial Group)

Objective: To compare the effectiveness of computed tomography (CT) with ICA as a diagnostic tool in patients with stable chest pain and intermediate pretest probability of coronary artery disease (CAD).

Design: Multicenter, randomized, assessor-blinded pragmatic prospective study.

Setting and participants: A total of 3667 patients with stable chest pain and intermediate pretest probability of CAD were enrolled at 26 centers and randomized into CT or ICA groups. Only 3561 patients were included in the modified intention-to-treat analysis, with 1808 patients and 1753 patients in the CT and ICA groups, respectively.

Main outcome measures: The primary outcome was a composite of cardiovascular death, nonfatal myocardial infarction, and nonfatal stroke over 3.5 years. The main secondary outcomes were major procedure-related complications and patient-reported angina pectoris during the last 4 weeks of follow up.

Main results: The primary outcome occurred in 38 of 1808 patients (2.1%) in the CT group and in 52 of 1753 patients (3.0%) in the ICA group (hazard ratio, 0.70; 95% CI, 0.46-1.07; P = .10). The secondary outcomes showed that major procedure-related complications occurred in 9 patients (0.5%) in the CT group and in 33 patients (1.9%) in the ICA group (hazard ratio, 0.26; 95% CI, 0.13-0.55). Rates of patient-reported angina in the final 4 weeks of follow-up were 8.8% in the CT group and 7.5% in the ICA group (odds ratio, 1.17; 95% CI, 0.92-1.48).

Conclusion: Risk of major adverse cardiovascular events from the primary outcome were similar in both the CT and ICA groups among patients with stable chest pain and intermediate pretest probability of CAD. Patients referred for CT had a lower rate of coronary angiography leading to fewer major procedure-related complications in these patients than in those referred for ICA.

 

 

Commentary

Evaluation and treatment of obstructive atherosclerosis is an important part of clinical care in patients presenting with angina symptoms.1 Thus, the initial investigation for patients with suspected obstructive CAD includes ruling out acute coronary syndrome and assessing quality of life.1 The diagnostic test should be tailored to the pretest probability for the diagnosis of obstructive CAD.2

In the United States, stress testing traditionally has been used for the initial assessment in patients with suspected CAD,3 but recently CTA has been utilized more frequently for this purpose. Compared to a stress test, which often helps identify and assess ischemia, CTA can provide anatomical assessment, with higher sensitivity to identify CAD.4 Furthermore, it can distinguish nonobstructive plaques that can be challenging to identify with stress test alone.


Study 1 Overview (SCOT-HEART Investigators)

Objective: To assess death from coronary heart disease and nonfatal myocardial infarction at 5 years in patients with stable chest pain referred to a cardiology clinic and managed with either standard care plus computed tomography angiography (CTA) or standard care alone.

Design: Multicenter, randomized, open-label prospective study.

Setting and participants: A total of 4146 patients with stable chest pain were randomized to standard care or standard care plus CTA at 12 centers across Scotland and were followed for 5 years.

Main outcome measures: The primary end point was a composite of death from coronary heart disease or nonfatal myocardial infarction. Main secondary end points were nonfatal myocardial infarction, nonfatal stroke, and frequency of invasive coronary angiography (ICA) and coronary revascularization with percutaneous coronary intervention or coronary artery bypass grafting.

Main results: The primary outcome, the composite of death from coronary heart disease or nonfatal myocardial infarction, occurred less often in the CTA group than in the standard-care group: 2.3% (48 of 2073 patients) vs 3.9% (81 of 2073 patients) (hazard ratio, 0.59; 95% CI, 0.41-0.84; P = .004). Although rates of ICA and coronary revascularization were higher in the CTA group in the first few months of follow-up, the overall rates were similar at 5 years, with ICA performed in 491 patients in the CTA group and 502 patients in the standard-care group (hazard ratio, 1.00; 95% CI, 0.88-1.13). Similarly, coronary revascularization was performed in 279 patients in the CTA group and 267 patients in the standard-care group (hazard ratio, 1.07; 95% CI, 0.91-1.27). More preventive therapies, however, were initiated in the CTA group than in the standard-care group (odds ratio, 1.40; 95% CI, 1.19-1.65).
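
To put these 5-year figures in absolute terms, a minimal arithmetic sketch using the event counts reported above (48 of 2073 vs 81 of 2073) follows; the derived quantities are unadjusted, so they only approximate the reported time-to-event hazard ratio:

```python
# Minimal sketch: absolute risk reduction (ARR) and number needed to test,
# computed from the raw 5-year event counts reported in SCOT-HEART.
# Unadjusted arithmetic only; the trial's hazard ratio comes from a
# time-to-event model, so these figures are approximations.
events_cta, n_cta = 48, 2073
events_std, n_std = 81, 2073

risk_cta = events_cta / n_cta      # 2.3%
risk_std = events_std / n_std      # 3.9%

arr = risk_std - risk_cta          # absolute risk reduction over ~5 years
rr = risk_cta / risk_std           # crude relative risk
nnt = 1 / arr                      # patients tested with CTA per event avoided

print(f"ARR = {arr:.2%}, crude RR = {rr:.2f}, NNT = {nnt:.0f} over 5 years")
# -> ARR = 1.59%, crude RR = 0.59, NNT = 63 over 5 years
```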

Conclusion: In patients with stable chest pain, the use of CTA in addition to standard care resulted in a significantly lower rate of death from coronary heart disease or nonfatal myocardial infarction at 5 years; the main contributor to this outcome was a reduced nonfatal myocardial infarction rate. There was no difference in the rate of coronary angiography or coronary revascularization between the 2 groups at 5 years.

Study 2 Overview (DISCHARGE Trial Group)

Objective: To compare the effectiveness of computed tomography (CT) with ICA as a diagnostic tool in patients with stable chest pain and intermediate pretest probability of coronary artery disease (CAD).

Design: Multicenter, randomized, assessor-blinded pragmatic prospective study.

Setting and participants: A total of 3667 patients with stable chest pain and intermediate pretest probability of CAD were enrolled at 26 centers and randomized to CT or ICA. Of these, 3561 patients were included in the modified intention-to-treat analysis: 1808 in the CT group and 1753 in the ICA group.

Main outcome measures: The primary outcome was a composite of cardiovascular death, nonfatal myocardial infarction, and nonfatal stroke over 3.5 years. The main secondary outcomes were major procedure-related complications and patient-reported angina pectoris during the last 4 weeks of follow-up.

Main results: The primary outcome occurred in 38 of 1808 patients (2.1%) in the CT group and in 52 of 1753 patients (3.0%) in the ICA group (hazard ratio, 0.70; 95% CI, 0.46-1.07; P = .10). The secondary outcomes showed that major procedure-related complications occurred in 9 patients (0.5%) in the CT group and in 33 patients (1.9%) in the ICA group (hazard ratio, 0.26; 95% CI, 0.13-0.55). Rates of patient-reported angina in the final 4 weeks of follow-up were 8.8% in the CT group and 7.5% in the ICA group (odds ratio, 1.17; 95% CI, 0.92-1.48).
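
A similar back-of-the-envelope sketch for the complication counts reported above (9 of 1808 vs 33 of 1753); again unadjusted, so it only approximates the reported hazard ratio:

```python
# Minimal sketch: unadjusted comparison of major procedure-related
# complications from the counts reported in DISCHARGE.
comp_ct, n_ct = 9, 1808
comp_ica, n_ica = 33, 1753

risk_ct = comp_ct / n_ct           # ~0.5%
risk_ica = comp_ica / n_ica        # ~1.9%

crude_rr = risk_ct / risk_ica      # crude risk ratio (reported HR was 0.26)
arr = risk_ica - risk_ct
per_complication = 1 / arr         # CT-first referrals per complication avoided

print(f"Crude RR = {crude_rr:.2f}; about 1 major complication avoided per "
      f"{per_complication:.0f} patients referred to CT first")
# -> Crude RR = 0.26; about 1 major complication avoided per 72 patients
```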

Conclusion: The risk of major adverse cardiovascular events (the primary outcome) was similar in the CT and ICA groups among patients with stable chest pain and intermediate pretest probability of CAD. Patients referred for CT underwent coronary angiography less often and consequently had fewer major procedure-related complications than those referred for ICA.

Commentary

Evaluation and treatment of obstructive atherosclerosis are an important part of clinical care for patients presenting with angina symptoms.1 The initial investigation of suspected obstructive CAD includes ruling out acute coronary syndrome and assessing quality of life.1 The choice of diagnostic test should then be tailored to the pretest probability of obstructive CAD.2

In the United States, stress testing has traditionally been used for the initial assessment of patients with suspected CAD,3 but CTA has recently been utilized more frequently for this purpose. Whereas stress testing identifies and quantifies ischemia, CTA provides anatomical assessment and has higher sensitivity for detecting CAD.4 It can also identify nonobstructive plaque, which can be difficult to detect with stress testing alone.
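
The sensitivity difference matters because it changes how much a negative result lowers the probability of disease. A minimal sketch of that Bayesian updating is shown below; the pretest probability and test characteristics are illustrative assumptions for demonstration, not values reported in the studies cited here:

```python
# Minimal sketch: post-test probability of obstructive CAD after a test,
# via likelihood ratios. The sensitivity/specificity figures below are
# illustrative assumptions only, not values from the cited trials.

def post_test_probability(pretest_p, sensitivity, specificity, positive):
    """Update disease probability with a test result using likelihood ratios."""
    pretest_odds = pretest_p / (1 - pretest_p)
    if positive:
        lr = sensitivity / (1 - specificity)   # positive likelihood ratio
    else:
        lr = (1 - sensitivity) / specificity   # negative likelihood ratio
    post_odds = pretest_odds * lr
    return post_odds / (1 + post_odds)

pretest = 0.30  # assumed intermediate pretest probability

tests = {  # hypothetical test characteristics for illustration
    "CTA": (0.95, 0.80),
    "Stress ECG": (0.60, 0.75),
}

for name, (sens, spec) in tests.items():
    p_neg = post_test_probability(pretest, sens, spec, positive=False)
    print(f"{name}: P(CAD) after a negative result = {p_neg:.1%}")
# A negative CTA drives the post-test probability far lower than a negative
# stress ECG does, illustrating how a less sensitive test can miss CAD.
```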

Whether CTA is superior to stress testing as the initial assessment for CAD has been debated. The randomized PROMISE trial compared functional stress testing with CTA as the initial strategy in patients with stable angina and found similar outcomes in the 2 groups at a median follow-up of 2 years.5 However, in the original SCOT-HEART trial (CT coronary angiography in patients with suspected angina due to coronary heart disease), published the same year as PROMISE, patients who underwent initial assessment with CTA had a numerically lower rate of the composite end point of cardiac death and myocardial infarction at a median follow-up of 1.7 years (1.3% vs 2.0%, P = .053).6

Given this result, the SCOT-HEART investigators extended follow-up to evaluate the composite end point of death from coronary heart disease or nonfatal myocardial infarction at 5 years.7 The trial enrolled patients initially referred to a cardiology clinic for evaluation of chest pain and randomized them to standard care plus CTA or standard care alone. At a median follow-up of 4.8 years, the primary outcome occurred less often in the CTA group (2.3%, 48 patients) than in the standard-care group (3.9%, 81 patients) (hazard ratio, 0.59; 95% CI, 0.41-0.84; P = .004). The 2 groups had similar rates of invasive coronary angiography and coronary revascularization.

The lower rate of nonfatal myocardial infarction with CTA plus standard care is hypothesized to reflect the higher rate of preventive therapies initiated in that group. The nature of the comparator should also be noted when contrasting these results with the PROMISE trial: in PROMISE, the comparator group predominantly underwent stress imaging (nuclear stress testing or echocardiography), whereas in SCOT-HEART the standard-care group predominantly underwent stress electrocardiography (ECG), with only 10% of patients undergoing stress imaging. The difference in nonfatal myocardial infarction rates may therefore reflect suboptimal diagnosis of CAD with stress ECG, which has lower sensitivity than stress imaging.

The DISCHARGE trial investigated the effectiveness of CTA vs ICA as the initial diagnostic test in patients with stable chest pain and an intermediate pretest probability of obstructive CAD.8 At 3.5 years of follow-up, the primary composite of cardiovascular death, myocardial infarction, or stroke was similar in the 2 groups (2.1% vs 3.0%; hazard ratio, 0.70; 95% CI, 0.46-1.07; P = .10). Importantly, because fewer patients underwent ICA, the risk of procedure-related complications was lower in the CTA group than in the ICA group. It is also worth noting that only about 25% of patients were ultimately found to have obstructive CAD (stenosis greater than 50%), which raises the question of whether an initial invasive strategy is appropriate for this population.

The strengths of these 2 studies include their large enrollments and adequate follow-up: 5 years in the SCOT-HEART trial and 3.5 years in the DISCHARGE trial. Together, they support the usefulness of CTA for the assessment of CAD. The control groups, however, differed substantially: in SCOT-HEART, the comparator group was primarily assessed by stress ECG, whereas in DISCHARGE it was assessed by ICA. In the PROMISE trial, the composite end point of death, myocardial infarction, hospitalization for unstable angina, or major procedural complication was similar when initial CTA was compared with functional testing (exercise ECG, nuclear stress testing, or echocardiography).5 Thus, clinical assessment is still needed when selecting the appropriate diagnostic test for patients with suspected CAD. The most recent guidelines give similar recommendations for CTA and stress imaging.9 Whether further improvements in CTA acquisition or the addition of CT-derived fractional flow reserve can further improve outcomes requires additional study.

Applications for Clinical Practice and System Implementation

In patients with stable chest pain and intermediate pretest probability of CAD, CTA improves diagnosis compared with stress ECG and reduces the use of low-yield ICA. Whether CTA outperforms other noninvasive stress imaging modalities in this population requires further study.

Practice Points

  • In patients with stable chest pain and intermediate pretest probability of CAD, CTA provides more accurate diagnostic assessment than stress ECG.
  • Use of CTA can potentially reduce the use of low-yield coronary angiography.

–Thai Nguyen, MD, Albert Chan, MD, Taishi Hirai, MD
University of Missouri, Columbia, MO

References

1. Knuuti J, Wijns W, Saraste A, et al. 2019 ESC Guidelines for the diagnosis and management of chronic coronary syndromes. Eur Heart J. 2020;41(3):407-477. doi:10.1093/eurheartj/ehz425

2. Nakano S, Kohsaka S, Chikamori T, et al. JCS 2022 guideline focused update on diagnosis and treatment in patients with stable coronary artery disease. Circ J. 2022;86(5):882-915. doi:10.1253/circj.CJ-21-1041

3. Fihn SD, Gardin JM, Abrams J, et al. 2012 ACCF/AHA/ACP/AATS/PCNA/SCAI/STS Guideline for the diagnosis and management of patients with stable ischemic heart disease: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines, and the American College of Physicians, American Association for Thoracic Surgery, Preventive Cardiovascular Nurses Association, Society for Cardiovascular Angiography and Interventions, and Society of Thoracic Surgeons. J Am Coll Cardiol. 2012;60(24):e44-e164. doi:10.1016/j.jacc.2012.07.013

4. Arbab-Zadeh A, Di Carli MF, Cerci R, et al. Accuracy of computed tomographic angiography and single-photon emission computed tomography-acquired myocardial perfusion imaging for the diagnosis of coronary artery disease. Circ Cardiovasc Imaging. 2015;8(10):e003533. doi:10.1161/CIRCIMAGING

5. Douglas PS, Hoffmann U, Patel MR, et al. Outcomes of anatomical versus functional testing for coronary artery disease. N Engl J Med. 2015;372(14):1291-1300. doi:10.1056/NEJMoa1415516

6. SCOT-HEART investigators. CT coronary angiography in patients with suspected angina due to coronary heart disease (SCOT-HEART): an open-label, parallel-group, multicentre trial. Lancet. 2015;385:2383-2391. doi:10.1016/S0140-6736(15)60291-4

7. SCOT-HEART Investigators, Newby DE, Adamson PD, et al. Coronary CT angiography and 5-year risk of myocardial infarction. N Engl J Med. 2018;379(10):924-933. doi:10.1056/NEJMoa1805971

8. DISCHARGE Trial Group, Maurovich-Horvat P, Bosserdt M, et al. CT or invasive coronary angiography in stable chest pain. N Engl J Med. 2022;386(17):1591-1602. doi:10.1056/NEJMoa2200963

9. Writing Committee Members, Lawton JS, Tamis-Holland JE, et al. 2021 ACC/AHA/SCAI guideline for coronary artery revascularization: a report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. J Am Coll Cardiol. 2022;79(2):e21-e129. doi:10.1016/j.jacc.2021.09.006



Fall Injury Among Community-Dwelling Older Adults: Effect of a Multifactorial Intervention and a Home Hazard Removal Program

Article Type
Changed
Thu, 06/02/2022 - 08:20

Study 1 Overview (Bhasin et al)

Objective: To examine the effect of a multifactorial intervention for fall prevention on fall injury in community-dwelling older adults.

Design: This was a pragmatic, cluster randomized trial conducted in 86 primary care practices across 10 health care systems.

Setting and participants: The primary care sites were selected based on the prespecified criteria of size, ability to implement the intervention, proximity to other practices, accessibility to electronic health records, and access to community-based exercise programs. The primary care practices were randomly assigned to intervention or control.

Eligibility criteria for participants at those practices included age 70 years or older, dwelling in the community, and having an increased risk of falls, as determined by a history of fall-related injury in the past year, 2 or more falls in the past year, or being afraid of falling because of problems with balance or walking. Exclusion criteria were inability to provide consent or lack of proxy consent for participants who were determined to have cognitive impairment based on screening, and inability to speak English or Spanish. A total of 2802 participants were enrolled in the intervention group, and 2649 participants were enrolled in the control group.

Intervention: The intervention contained 5 components: a standardized assessment of 7 modifiable risk factors for fall injuries; standardized protocol-driven recommendations for management of risk factors; an individualized care plan focused on 1 to 3 risk factors; implementation of care plans, including referrals to community-based programs; and follow-up care conducted by telephone or in person. The modifiable risk factors included impairment of strength, gait, or balance; use of medications related to falls; postural hypotension; problems with feet or footwear; visual impairment; osteoporosis or vitamin D deficiency; and home safety hazards. The intervention was delivered by nurses who had completed online training modules and face-to-face training sessions focused on the intervention and motivational interviewing along with continuing education, in partnership with participants and their primary care providers. In the control group, participants received enhanced usual care, including an informational pamphlet, and were encouraged to discuss fall prevention with their primary care provider, including the results of their screening evaluation.

Main outcome measures: The primary outcome of the study was the first serious fall injury in a time-to-event analysis, defined as a fall resulting in a fracture (other than thoracic or lumbar vertebral fracture), joint dislocation, cut requiring closure, head injury requiring hospitalization, sprain or strain, bruising or swelling, or other serious injury. The secondary outcome was first patient-reported fall injury, also in a time-to-event analysis, ascertained by telephone interviews conducted every 4 months. Other outcomes included hospital admissions, emergency department visits, and other health care utilization. Adjudication of fall events and injuries was conducted by a team blinded to treatment assignment and verified using administrative claims data, encounter data, or electronic health record review.

Main results: The intervention and control groups were similar in terms of sex and age: 62.5% vs 61.5% of participants were women, and mean (SD) age was 79.9 (5.7) years and 79.5 (5.8) years, respectively. Other demographic characteristics were similar between groups. For the primary outcome, the rate of first serious injury was 4.9 per 100 person-years in the intervention group and 5.3 per 100 person-years in the control group, with a hazard ratio of 0.92 (95% CI, 0.80-1.06; P = .25). For the secondary outcome of patient-reported fall injury, there were 25.6 events per 100 person-years in the intervention group and 28.6 in the control group, with a hazard ratio of 0.90 (95% CI, 0.83-0.99; P = .004). Rates of hospitalization and other secondary outcomes were similar between groups.
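
Because these rates are expressed per 100 person-years, a minimal sketch of how such rates and their crude ratio are computed may help; the event counts and person-time below are invented to reproduce the reported rates and are not the trial's actual denominators:

```python
# Minimal sketch: incidence rates per 100 person-years and their crude ratio.
# Event counts and person-time below are assumed for illustration; only the
# reported rates (4.9 and 5.3 per 100 person-years) come from the trial.

def rate_per_100py(events, person_years):
    return 100 * events / person_years

events_int, py_int = 343, 7000.0   # hypothetical: yields 4.9 per 100 PY
events_ctl, py_ctl = 371, 7000.0   # hypothetical: yields 5.3 per 100 PY

rate_int = rate_per_100py(events_int, py_int)
rate_ctl = rate_per_100py(events_ctl, py_ctl)
crude_ratio = rate_int / rate_ctl  # crude rate ratio, not the adjusted HR

print(f"{rate_int:.1f} vs {rate_ctl:.1f} per 100 person-years; "
      f"crude rate ratio = {crude_ratio:.2f}")
# -> 4.9 vs 5.3 per 100 person-years; crude rate ratio = 0.92
```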

Conclusion: The multifactorial STRIDE intervention did not reduce the rate of serious fall injury when compared to enhanced usual care. The intervention did result in lower rates of fall injury by patient report, but no other significant outcomes were seen.

Study 2 Overview (Stark et al)

Objective: To examine the effect of a behavioral home hazard removal intervention for fall prevention on risk of fall in community-dwelling older adults.

Design: This randomized clinical trial was conducted at a single site in St. Louis, Missouri. Participants were community-dwelling older adults who received services from the Area Agency on Aging (AAA). Inclusion criteria were age 65 years or older, 1 or more falls in the previous 12 months or self-reported worry about falling, and currently receiving services from an AAA. Exclusion criteria were living in an institution or being severely cognitively impaired and unable to follow directions or report falls. Participants who met the criteria were contacted by phone and invited to participate. A total of 310 participants were enrolled in the study, with an equal number assigned to the intervention and control groups.

Intervention: The intervention included hazard identification and removal after a comprehensive assessment of participants, their behaviors, and the environment; this assessment took place during the first visit, which lasted approximately 80 minutes. A home hazard removal plan was developed, and in the second session, which lasted approximately 40 minutes, remediation of hazards was carried out. A third session for home modification that lasted approximately 30 minutes was conducted, if needed. At 6 months after the intervention, a booster session to identify and remediate any new home hazards and address issues was conducted. Specific interventions, as identified by the assessment, included minor home repair such as grab bars, adaptive equipment, task modification, and education. Shared decision making that enabled older adults to control changes in their homes, self-management strategies to improve awareness, and motivational enhancement strategies to improve acceptance were employed. Scripted algorithms and checklists were used to deliver the intervention. For usual care, an annual assessment and referrals to community services, if needed, were conducted in the AAA.

Main outcome measures: The primary outcome of the study was the number of days to first fall in 12 months. Falls were defined as unintentional movements to the floor, ground, or object below knee level, and falls were recorded through a daily journal for 12 months. Participants were contacted by phone if they did not return the journal or reported a fall. Participants were interviewed to verify falls and determine whether a fall was injurious. Secondary outcomes included rate of falls per person per 12 months; daily activity performance measured using the Older Americans Resources and Services Activities of Daily Living scale; falls self-efficacy, which measures confidence performing daily activities without falling; and quality of life using the SF-36 at 12 months.
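
Days to first fall is a time-to-event outcome, so analyses of this kind typically use Kaplan-Meier estimation with a log-rank test or Cox model. A minimal sketch with the lifelines library on synthetic data is shown below; every duration and event indicator is invented for illustration and is not data from this trial:

```python
# Minimal sketch: time-to-first-fall analysis on synthetic data using the
# lifelines library. All durations and event indicators are invented; they
# are not data from the Stark et al trial.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 155  # per-group size, mirroring the trial's 310 total participants

# Synthetic days to first fall, censored at 365 days of follow-up
t_int = np.minimum(rng.exponential(500, n), 365)
t_ctl = np.minimum(rng.exponential(400, n), 365)
e_int = t_int < 365  # True = fall observed, False = censored at 1 year
e_ctl = t_ctl < 365

kmf = KaplanMeierFitter()
kmf.fit(t_int, event_observed=e_int, label="intervention")
print(kmf.survival_function_.tail(1))  # estimated fall-free probability at 1 year

result = logrank_test(t_int, t_ctl, event_observed_A=e_int, event_observed_B=e_ctl)
print(f"log-rank P = {result.p_value:.3f}")
```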

Main results: Most of the study participants were women (74%), and mean (SD) age was 75 (7.4) years. Study retention was similar between groups, with 82% of the intervention group and 81% of the control group completing the study. Fidelity to the intervention, as measured by an interventionist-completed checklist, was 99%, and adherence to home modification, as measured by the self-reported number of home modifications in use, was high: 92% at 6 months and 91% at 12 months. For the primary outcome, the hazard of falling did not differ between the intervention and control groups (hazard ratio, 0.90; 95% CI, 0.66-1.27). Among the secondary outcomes, the rate of falling was lower in the intervention group than in the control group (relative risk, 0.62; 95% CI, 0.40-0.95). There was no difference in the other secondary outcomes of daily activity performance, falls self-efficacy, or quality of life.

Conclusion: Despite high adherence to home modifications and fidelity to the intervention, this home hazard removal program did not reduce the risk of falling when compared to usual care. It did reduce the rate of falls, although no other effects were observed.

Commentary

Observational studies have identified factors that contribute to falls,1 and over the past 30 years a number of intervention trials designed to reduce the risk of falling have been conducted. A recent Cochrane review, published prior to the Bhasin et al and Stark et al trials, looked at the effect of multifactorial interventions for fall prevention across 62 trials that included 19,935 older adults living in the community. The review concluded that multifactorial interventions may reduce the rate of falls, but this conclusion was based on low-quality evidence and there was significant heterogeneity across the studies.2

The STRIDE randomized trial represents the latest effort to address the evidence gap around fall prevention, with the STRIDE investigators hoping this would be the definitive trial that leads to practice change in fall prevention. Smaller trials that have demonstrated effectiveness were brought to scale in this large randomized trial that included 86 practices and more than 5000 participants. The investigators used risk of injurious falls as the primary outcome, as this outcome is considered the most clinically meaningful for the study population. The results, however, were disappointing: the multifactorial intervention in STRIDE did not result in a reduction of risk of injurious falls. Challenges in the implementation of this large trial may have contributed to its results; falls care managers, key to this multifactorial intervention, reported difficulties in navigating complex relationships with patients, families, study staff, and primary care practices during the study. Barriers reported included clinical space limitations, variable buy-in from providers, and turnover of practice staff and providers.3 Such implementation factors may have resulted in the divergent results between smaller clinical trials and this large-scale trial conducted across multiple settings.

The second study, by Stark et al, examined a home modification program and its effect on risk of falls. A prior Cochrane review examining the effect of home safety assessment and modification indicates that these strategies are effective in reducing the rate of falls as well as the risk of falling.4 The results of the current trial showed a reduction in the rate of falls but not in the risk of falling; however, this study did not examine outcomes of serious injurious falls, which may be more clinically meaningful. The Stark et al study adds to the existing literature showing that home modification may have an impact on fall rates. One noteworthy aspect of the Stark et al trial is the high adherence rate to home modification in a community-based approach; perhaps the investigators’ approach can be translated to real-world use.

Applications for Clinical Practice and System Implementation

The role of exercise programs in reducing fall rates is well established,5 but neither of these studies focused on exercise interventions. STRIDE offered referral to community-based exercise programs, but such programs vary, and study staff reported challenges in matching participants with appropriate programs.3 Further studies that examine combinations of multifactorial falls risk reduction, exercise, and home safety, with careful attention to implementation challenges to ensure fidelity and adherence, are needed to determine the best fall prevention strategy for older adults at risk.

Given the results of these trials, it is difficult to recommend one falls prevention intervention over another. Clinicians should continue to identify falls risk factors using standardized assessments and determine which factors are modifiable.

Practice Points

  • Incorporating assessments of falls risk in primary care is feasible, and such assessments can identify important risk factors.
  • Clinicians and health systems should identify avenues, such as developing programmatic approaches, to providing home safety assessment and intervention, exercise options, medication review, and modification of other risk factors.
  • Ensuring delivery of these elements reliably through programmatic approaches with adequate follow-up is key to preventing falls in this population.

—William W. Hung, MD, MPH

References

1. Tinetti ME, Speechley M, Ginter SF. Risk factors for falls among elderly persons living in the community. N Engl J Med. 1988;319:1701-1707. doi:10.1056/NEJM198812293192604

2. Hopewell S, Adedire O, Copsey BJ, et al. Multifactorial and multiple component interventions for preventing falls in older people living in the community. Cochrane Database Syst Rev. 2018;7(7):CD012221. doi:10.1002/14651858.CD012221.pub2

3. Reckrey JM, Gazarian P, Reuben DB, et al. Barriers to implementation of STRIDE, a national study to prevent fall-related injuries. J Am Geriatr Soc. 2021;69(5):1334-1342. doi:10.1111/jgs.17056

4. Gillespie LD, Robertson MC, Gillespie WJ, et al. Interventions for preventing falls in older people living in the community. Cochrane Database Syst Rev. 2012;2012(9):CD007146. doi:10.1002/14651858.CD007146.pub3

5. Sherrington C, Fairhall NJ, Wallbank GK, et al. Exercise for preventing falls in older people living in the community. Cochrane Database Syst Rev. 2019;1(1):CD012424. doi:10.1002/14651858.CD012424.pub2




References

1. Tinetti ME, Speechley M, Ginter SF. Risk factors for falls among elderly persons living in the community. N Engl J Med. 1988; 319:1701-1707. doi:10.1056/NEJM198812293192604

2. Hopewell S, Adedire O, Copsey BJ, et al. Multifactorial and multiple component interventions for preventing falls in older people living in the community. Cochrane Database Syst Rev. 2018;7(7):CD012221. doi:0.1002/14651858.CD012221.pub2

3. Reckrey JM, Gazarian P, Reuben DB, et al. Barriers to implementation of STRIDE, a national study to prevent fall-related injuries. J Am Geriatr Soc. 2021;69(5):1334-1342. doi:10.1111/jgs.17056

4. Gillespie LD, Robertson MC, Gillespie WJ, et al. Interventions for preventing falls in older people living in the community. Cochrane Database Syst Rev. 2012;2012(9):CD007146. doi:10.1002/14651858.CD007146.pub3

5. Sherrington C, Fairhall NJ, Wallbank GK, et al. Exercise for preventing falls in older people living in the community. Cochrane Database Syst Rev. 2019;1(1):CD012424. doi:10.1002/14651858.CD012424.pub2


More evidence dementia not linked to PPI use in older people

Article Type
Changed
Tue, 05/31/2022 - 13:33

Controversy regarding the purported link between the use of proton pump inhibitors (PPIs) or histamine H2 receptor antagonists (H2RAs) and risk for dementia continues.

Adding to the “no link” column comes new evidence from a study presented at the annual Digestive Disease Week® (DDW).

Among almost 19,000 people, no association was found between the use of these agents and a greater likelihood of incident dementia, Alzheimer’s disease, or cognitive decline in people older than 65 years.

“We found that baseline PPI or H2RA use in older adults was not associated with dementia, with mild cognitive impairment, or declines in cognitive scores over time,” said lead author Raaj Shishir Mehta, MD, a gastroenterology fellow at Massachusetts General Hospital in Boston.

“While deprescribing efforts are important, especially when medications are not indicated, these data provide reassurance about the cognitive impacts of long-term use of PPIs in older adults,” he added.

Growing use, growing concern

As PPI use has increased worldwide, so too have concerns over the adverse effects from their long-term use, Dr. Mehta said.

“One particular area of concern, especially among older adults, is the link between long-term PPI use and risk for dementia,” he said.

Igniting the controversy was a February 2016 study published in JAMA Neurology that showed a positive association between PPI use and dementia in residents of Germany aged 75 years and older. Researchers linked PPI use to a 44% increased risk of dementia over 5 years.

The 2016 study was based on claims data, which can introduce “inaccuracy or bias in defining dementia cases,” Dr. Mehta said. He noted that this and other previous studies were also limited by an inability to account for concomitant medications and comorbidities.

To overcome these limitations in their study, Dr. Mehta and colleagues analyzed medication data collected during in-person visits and asked experts to confirm dementia outcomes. The research data come from ASPREE, a large aspirin study of 18,846 people older than 65 years in the United States and Australia. Participants were enrolled from 2010 to 2014. A total of 566 people developed incident dementia during follow-up.

The researchers had data on alcohol consumption and other lifestyle factors, as well as information on comorbidities, hospitalizations, and overall well-being.

“Perhaps the biggest strength of our study is our rigorous neurocognitive assessments,” Dr. Mehta said.

They assessed cognition at baseline and at years 1, 3, 5, and 7 using a battery of tests. An expert panel of neurologists, neuropsychologists, and geriatricians adjudicated cases of dementia, in accordance with DSM-IV criteria. If the diagnosis was unclear, they referred people for additional workup, including neuroimaging.

Cox proportional hazards regression and mixed-effects modeling were used to relate medication use to dementia outcomes and cognitive scores.

All analyses were adjusted for age, sex, body mass index, alcohol use, family history of dementia, medications, and other medical comorbidities.
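As an illustration of this modeling approach, the sketch below fits a Cox proportional hazards model with the open-source lifelines library. The toy data frame and its column names (time_to_dementia, dementia, ppi_use, age) are hypothetical stand-ins, not the ASPREE analysis code; additional adjustment variables would simply be included as extra columns.

```python
# A minimal sketch (not the study's code) of a Cox proportional hazards model
# relating baseline PPI use to incident dementia.
import pandas as pd
from lifelines import CoxPHFitter  # pip install lifelines

# Hypothetical toy data: follow-up time in years, event indicator, exposure,
# and one covariate; a real analysis would include all adjustment variables.
df = pd.DataFrame({
    "time_to_dementia": [6.2, 7.0, 4.5, 7.0, 5.1, 6.8, 3.9, 7.0],
    "dementia":         [1,   0,   1,   0,   0,   0,   1,   0],
    "ppi_use":          [1,   0,   0,   1,   1,   0,   1,   0],
    "age":              [71,  68,  74,  80,  77,  69,  82,  73],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_dementia", event_col="dementia")
cph.print_summary()  # exp(coef) for ppi_use is the adjusted hazard ratio
```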

At baseline, PPI users were more likely to be White, have fewer years of education, and have higher rates of hypertension, diabetes, and kidney disease. This group also was more likely to be taking five or more medications.

Key points

During 80,976 person-years of follow-up, there were 566 incident cases of dementia, including 235 probable cases of Alzheimer’s disease and 331 other dementias.

Baseline PPI use, in comparison with nonuse, was not associated with incident dementia (hazard ratio, 0.86; 95% confidence interval, 0.70-1.05).
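Two quick back-of-the-envelope checks on these figures: the crude incidence rate implied by the person-years above, and what a confidence interval that crosses 1.0 means. The snippet below uses only the numbers reported in this article.

```python
# Back-of-the-envelope checks using only the numbers reported above.
cases = 566
person_years = 80_976

rate_per_1000_py = cases / person_years * 1_000
print(f"{rate_per_1000_py:.1f} dementia cases per 1,000 person-years")  # ~7.0

# The 95% CI for the hazard ratio includes 1.0, so the estimate is compatible
# with anything from a 30% lower to a 5% higher hazard among baseline PPI
# users -- i.e., no statistically significant association.
hr, lo, hi = 0.86, 0.70, 1.05
print(f"HR {hr} (95% CI, {lo}-{hi}); CI crosses 1.0: {lo < 1.0 < hi}")
```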

“Similarly, when we look specifically at Alzheimer’s disease or mixed types of dementia, we find no association between baseline PPI use and dementia,” Dr. Mehta said.

When they excluded people already taking PPIs at baseline, they found no association between starting PPIs and developing dementia over time.

Secondary aims of the study included looking for a link between PPI use and mild cognitive impairment (cognitive impairment/no dementia) or significant changes in cognition over time. In both cases, no association emerged: baseline PPI use was associated neither with mild cognitive impairment nor with changes in overall cognitive test scores over time.

To determine whether any association could be a class effect of acid suppression medication, they assessed use of H2RA medications and development of incident dementia. Again, the researchers found no link.

A diverse multinational population from urban and rural areas was a strength of the study, as was the “very rigorous cognitive testing with expert adjudication of our endpoints,” Dr. Mehta said. In addition, fewer than 5% of patients were lost to follow-up.

In terms of limitations, this was an observational study, “so residual confounding is always possible,” he added. “But I’ll emphasize that we are among the largest studies to date with a wealth of covariates.”

Why the different findings?

The study was “really well done,” session moderator Paul Moayyedi, MD, said during the Q&A session at DDW 2022.

Dr. Moayyedi, a professor of medicine at McMaster University, Hamilton, Ont., asked Dr. Mehta why he “found absolutely no signal, whereas the German study did.”

“It’s a good question,” Dr. Mehta responded. “If you look across the board, there have been conflicting results.”

The disparity could be related to how researchers conducting claims data studies classify dementia, he noted.

“If you look at the nitty-gritty details over 5 years, almost 40% of participants [in those studies] end up with a diagnosis of dementia, which is quite high,” Dr. Mehta said. “That raises questions about whether the diagnosis of dementia is truly accurate.”

Dr. Mehta and Dr. Moayyedi reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Up in smoke: Cannabis-related ED visits increased 18-fold for older Californians

Article Type
Changed
Thu, 05/19/2022 - 11:46

As older adults turn to cannabis to relieve chronic symptoms, or for fun, an increasing number are winding up in emergency departments with side effects from the drug.

Researchers in California found an 18-fold increase in the rate of cannabis-related emergency department (ED) visits among adults over age 65 in the state from 2005 to 2019.

Addressing potential harms of cannabis use among older adults, who face heightened risk for adverse reactions to the substance, “is urgently required,” the researchers reported at the annual meeting of the American Geriatrics Society.

The researchers advised doctors to discuss cannabis use with older patients and screen older adults for cannabis use. Those living with multiple chronic conditions and taking multiple medications are especially likely to be at risk for harm, coinvestigator Benjamin Han, MD, MPH, a geriatrician at the University of California, San Diego, said in an interview.

Dr. Han added that “very little” is understood about the risks and benefits of cannabis use in the elderly, and more studies are needed “so that clinicians can have data-informed discussions with their patients.”

California legalized medical marijuana in 1996 and recreational marijuana in 2016.

The researchers used diagnostic code data from California’s nonmilitary acute care hospitals, collected by the state’s Department of Healthcare Access and Information, to calculate annual rates of cannabis-related visits per 10,000 ED visits.
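For clarity, the rate calculation works as sketched below; the visit counts in the snippet are hypothetical, and only the formula (cannabis-related visits per 10,000 total ED visits, computed annually) reflects the method described above.

```python
# A minimal sketch of the stated method: cannabis-related visits per 10,000
# total ED visits in a given year. The counts below are hypothetical.
def rate_per_10k(cannabis_visits: int, total_ed_visits: int) -> float:
    return cannabis_visits / total_ed_visits * 10_000

# Hypothetical year: 250 cannabis-related visits out of 1.2 million ED visits.
print(f"{rate_per_10k(250, 1_200_000):.1f} per 10,000 ED visits")  # 2.1
```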

ED trips up sharply among older adults

Rates of cannabis-related visits increased significantly for all older adult age ranges (P < .001), according to the researchers. Among those aged 65-74 years, the rate increased about 15-fold, from 44.9 per 10,000 visits in 2005 to 714.5 per 10,000 in 2019; for ages 75-84, the rate increased about 22-fold, from 8.4 to 193.9 per 10,000; and for those 85 and older, the rate jumped nearly 18-fold, from 2.1 to 39.2 per 10,000.

The greatest increase occurred in visits categorized in diagnostic codes as cannabis abuse and unspecified use. Cannabis dependence and cannabis poisoning accounted for only a small fraction of cases, the investigators found.

The researchers did not have data on specific reasons for a visit, or whether patients had smoked or ingested marijuana products. They also could not discern whether patients had used delta-9-tetrahydrocannabinol, which has psychoactive properties, or cannabidiol, which typically does not have the same mind-altering effects.

Dr. Han said the data may not present a full picture of marijuana-related ED visits. “It is important to recognize that older adults have lived through the very punitive language around drug use – including cannabis – as part of the racist war on drugs,” which could lead them to omit mentioning drug use during the intake process.

A 2017 study linked cannabis use among older adults with more injuries, which in turn led to greater emergency department use. Brian Kaskie, PhD, associate professor in health management and policy at the University of Iowa, Iowa City, said in an interview that the new findings show a state-specific but alarming trend and that more research is needed.

“Were these first-time users who were not familiar with anxiety-inducing aspects of cannabis use and took high potency products? Did they complete any education about how to use cannabis?” said Dr. Kaskie, who was not involved in the new study. “Were the ER visits for relatively benign, nonemergent reasons or were these ... visits an outcome of a tragic, harmful event like a car accident or overdose?”

Dr. Han and Dr. Kaskie disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Abaloparatide works in ‘ignored population’: Men with osteoporosis

Article Type
Changed
Tue, 05/17/2022 - 10:03

San Diego – The anabolic osteoporosis treatment abaloparatide (Tymlos, Radius Health) works in men as well as women, new data indicate.  

Findings from the Abaloparatide for the Treatment of Men With Osteoporosis (ATOM) randomized, double-blind, placebo-controlled, phase 3 study were presented last week at the American Association of Clinical Endocrinology (AACE) Annual Meeting 2022.

Abaloparatide, a subcutaneously administered parathyroid-hormone–related protein (PTHrP) analog, resulted in significant increases in bone mineral density by 12 months at the lumbar spine, total hip, and femoral neck, compared with placebo in men with osteoporosis, with no significant adverse effects.

“Osteoporosis is underdiagnosed in men. Abaloparatide is another option for an ignored population,” presenter Neil Binkley, MD, of the University of Wisconsin School of Medicine and Public Health, Madison, said in an interview.

Abaloparatide was approved by the U.S. Food and Drug Administration in 2017 for the treatment of postmenopausal women at high risk for fracture due to a history of osteoporotic fracture or multiple fracture risk factors, or who haven’t responded to or are intolerant of other osteoporosis therapies.

While postmenopausal women have mainly been the focus in osteoporosis, men account for approximately 30% of the societal burden of osteoporosis and have greater fracture-related morbidity and mortality than women.

About one in four men over the age of 50 years will have a fragility fracture in their lifetime. Yet, they’re far less likely to be diagnosed or to be included in osteoporosis treatment trials, Dr. Binkley noted.

Asked to comment, session moderator Thanh D. Hoang, DO, told this news organization, “I think it’s a great option to treat osteoporosis, and now we have evidence for treating osteoporosis in men. Mostly the data have come from postmenopausal women.”

Screen men with hypogonadism or those taking steroids

“This new medication is an addition to the very limited number of treatments that we have when patients don’t respond to [initial] medications. To have another anabolic bone-forming medication is very, very good,” said Dr. Hoang, who is professor and program director of the Endocrinology Fellowship Program at Walter Reed National Military Medical Center, Bethesda, Maryland.

Radius Health filed a Supplemental New Drug Application with the FDA for abaloparatide (Tymlos) subcutaneous injection in men with osteoporosis at high risk for fracture in February. There is a 10-month review period.

Dr. Binkley advises bone screening for men who have conditions such as hypogonadism or who are taking glucocorticoids or chemotherapeutics.

But, he added, “I think that if we did nothing else good in the osteoporosis field, if we treated people after they fractured that would be a huge step forward. Even with a normal T score, when those people fracture, they [often] don’t have normal bone mineral density ... That’s a group of people we’re ignoring still. They’re not getting diagnosed, and they’re not getting treated.”

ATOM Study: Significant BMD increases at key sites

The approval of abaloparatide in women was based on the phase 3, 18-month ACTIVE trial of more than 2,000 high-risk women, in whom abaloparatide was associated with an 86% reduction in vertebral fracture incidence, compared with placebo, as well as significantly greater reductions in nonvertebral fractures, compared with both placebo and teriparatide (Forteo, Eli Lilly).

The ATOM study randomized a total of 228 men aged 40-85 years with primary or hypogonadism-associated osteoporosis 2:1 to subcutaneous abaloparatide 80 μg or injected placebo daily for 12 months. All had T scores (based on the male reference range) of ≤ −2.5 at the lumbar spine or hip; or ≤ −1.5 plus a radiologic vertebral fracture or a history of low-trauma nonvertebral fracture in the past 5 years; or ≤ −2.0 if older than 65 years.
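Restating those enrollment criteria as explicit logic can make them easier to follow; the sketch below encodes them as a simple Python check. The thresholds come from the article itself, while the function name and argument structure are illustrative assumptions.

```python
# A minimal sketch encoding the ATOM bone-density entry criteria described
# above. Thresholds are from the article; the function is illustrative and
# ignores other protocol requirements (e.g., the 40-85 year age window).
def atom_bone_criteria_met(t_score: float, age: int,
                           fracture_history: bool) -> bool:
    """t_score: lowest T score at lumbar spine or hip (male reference range).
    fracture_history: radiologic vertebral fracture, or low-trauma
    nonvertebral fracture within the past 5 years."""
    if t_score <= -2.5:
        return True
    if t_score <= -1.5 and fracture_history:
        return True
    if t_score <= -2.0 and age > 65:
        return True
    return False

print(atom_bone_criteria_met(-2.1, 70, False))  # True: T <= -2.0, older than 65
print(atom_bone_criteria_met(-1.8, 60, True))   # True: T <= -1.5 with fracture
print(atom_bone_criteria_met(-1.8, 60, False))  # False
```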

Increases in bone mineral density from baseline were significantly greater with abaloparatide than with placebo at the lumbar spine, total hip, and femoral neck at 3, 6, and 12 months. Mean percentage changes at 12 months were 8.5%, 2.1%, and 3.0% for the three sites, respectively, compared with 1.2%, 0.01%, and 0.2% for placebo (all P ≤ .0001).

Three fractures occurred in those receiving placebo and one with abaloparatide.

For markers of bone turnover, median serum procollagen type I N-terminal propeptide (s-PINP) was 111.2 ng/mL after 1 month of abaloparatide treatment and 85.7 ng/mL at month 12. Median serum carboxy-terminal cross-linking telopeptide of type I collagen (s-CTX) was 0.48 ng/mL at month 6 and 0.45 ng/mL at month 12 in the abaloparatide group. Relative to baseline, geometric mean s-PINP and s-CTX increased significantly at months 3, 6, and 12 (all P < .001 for relative treatment effect of abaloparatide vs. placebo).
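For readers unfamiliar with the phrase “geometric mean relative to baseline,” the sketch below shows the standard calculation: exponentiate the mean of the log-transformed follow-up/baseline ratios. The marker values in the example are hypothetical; only the formula is the point.

```python
# A minimal sketch of a "geometric mean relative to baseline" calculation.
import math

baseline = [30.0, 45.0, 38.0, 52.0]    # hypothetical s-PINP at baseline, ng/mL
month_12 = [85.0, 120.0, 90.0, 140.0]  # hypothetical s-PINP at month 12, ng/mL

log_ratios = [math.log(m / b) for b, m in zip(baseline, month_12)]
gm_fold_change = math.exp(sum(log_ratios) / len(log_ratios))
print(f"Geometric mean fold change from baseline: {gm_fold_change:.1f}")  # ~2.6
```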

The most commonly reported treatment-emergent adverse events were injection site erythema (12.8% vs. 5.1%), nasopharyngitis (8.7% vs. 7.6%), dizziness (8.7% vs. 1.3%), and arthralgia (6.7% vs. 1.3%), with abaloparatide versus placebo. Serious treatment-emergent adverse event rates were similar in both groups (5.4% vs. 5.1%). There was one death in the abaloparatide group, which was deemed unrelated to the drug.

Dr. Binkley has reported receiving consulting fees from Amgen and research support from Radius. Dr. Hoang has reported disclosures with Acella Pharmaceuticals and Horizon Therapeutics (no financial compensation).

A version of this article first appeared on Medscape.com.


Study casts doubt on safety, efficacy of L-serine supplementation for AD

Article Type
Changed
Fri, 07/01/2022 - 13:29

While previous research suggests that dietary supplementation with L-serine may be beneficial for patients with Alzheimer’s disease (AD), a new study casts doubt on the potential efficacy, and even the safety, of this treatment.

When given to patients with AD, L-serine supplements could be driving abnormally increased serine levels in the brain even higher, potentially accelerating neuronal death, according to study author Xu Chen, PhD, of the University of California, San Diego, and colleagues.

This conclusion conflicts with a 2020 study by Juliette Le Douce, PhD, and colleagues, who reported that oral L-serine supplementation may act as a “ready-to-use therapy” for AD, based on their findings that patients with AD had low levels of PHGDH, an enzyme necessary for synthesizing serine, and AD-like mice had low levels of serine.

Dr. Sheng Zhong

Writing in Cell Metabolism, Dr. Chen and colleagues framed the present study, and their findings, in this context.

“In contrast to the work of Le Douce et al., here we report that PHGDH mRNA and protein levels are increased in the brains of two mouse models of AD and/or tauopathy, and are also progressively increased in human brains with no, early, and late AD pathology, as well as in people with no, asymptomatic, and symptomatic AD,” they wrote.

They suggested adjusting clinical recommendations for L-serine, the form of the amino acid commonly found in supplements. In the body, L-serine is converted to D-serine, which acts on the NMDA receptor (NMDAR).

Research suggests ‘long-term use of D-serine contributes to neuronal death’

“We feel oral L-serine as a ready-to-use therapy to AD warrants precaution,” Dr. Chen and colleagues wrote. “This is because despite being a cognitive enhancer, some [research] suggests that long-term use of D-serine contributes to neuronal death in AD through excitotoxicity. Furthermore, D-serine, as a co-agonist of NMDAR, would be expected to oppose NMDAR antagonists, which have proven clinical benefits in treating AD.”

According to principal author Sheng Zhong, PhD, of the University of California, San Diego, “Research is needed to test if targeting PHGDH can ameliorate cognitive decline in AD.”

Dr. Zhong also noted that the present findings support the “promise of using a specific RNA in blood as a biomarker for early detection of Alzheimer’s disease.” This approach is currently being validated at UCSD Shiley-Marcos Alzheimer’s Disease Research Center, he added.

Roles of PHGDH and serine in Alzheimer’s disease require further study

Commenting on both studies, Steve W. Barger, PhD, of the University of Arkansas for Medical Sciences, Little Rock, suggested that more work is needed to better understand the roles of PHGDH and serine in AD before clinical applications can be considered.

“In the end, these two studies fail to provide the clarity we need in designing evidence-based therapeutic hypotheses,” Dr. Barger said in an interview. “We still do not have a firm grasp on the role that D-serine plays in AD. Indeed, the evidence regarding even a single enzyme contributing to its levels is ambiguous.”

Dr. Barger, who has published extensively on the topic of neuronal death, with a particular focus on Alzheimer’s disease, noted that “determination of what happens to D-serine levels in AD has been of interest for decades,” but levels of the amino acid have been notoriously challenging to measure because “D-serine can disappear rapidly from the brain and its fluids after death.”

While Dr. Le Douce and colleagues did measure levels of serine in mice, Dr. Barger noted that the study by Dr. Chen and colleagues was conducted with more “quantitatively rigorous methods.” Even though Dr. Chen and colleagues “did not assay the levels of D-serine itself ... the implication of their findings is that PHGDH is poised to elevate this critical neurotransmitter,” leading to their conclusion that serine supplementation is “potentially dangerous.”

At this point, it may be too early to tell, according to Dr. Barger.

He suggested that conclusions drawn from PHGDH levels alone are “always limited,” and conclusions based on serine levels may be equally dubious, considering that the activities and effects of serine “are quite complex,” and may be influenced by other physiologic processes, including the effects of gut bacteria.

Instead, Dr. Barger suggested that changes in PHGDH and serine may be interpreted as signals coming from a more relevant process upstream: glucose metabolism.

“What we can say confidently is that the glucose metabolism that PHGDH connects to D-serine is most definitely a factor in AD,” he said. “Countless studies have documented what now appears to be a universal decline in glucose delivery to the cerebral cortex, even before frank dementia sets in.”

Dr. Barger noted that declining glucose delivery coincides with some of the earliest events in the development of AD, perhaps “linking accumulation of amyloid β-peptide to subsequent neurofibrillary tangles and tissue atrophy.”

Dr. Barger’s own work recently demonstrated that AD is associated with “an irregularity in the insertion of a specific glucose transporter (GLUT1) into the cell surface” of astrocytes.

“It could be more effective to direct therapeutic interventions at these events lying upstream of PHGDH or serine,” he concluded.

The study was partly supported by a Kreuger v. Wyeth research award. The investigators and Dr. Barger reported no conflicts of interest.


Higher industriousness linked to reduced risk of predementia syndrome in older adults

Article Type
Changed
Tue, 05/10/2022 - 11:01

Higher industriousness was associated with a 25% reduced risk of concurrent motoric cognitive risk syndrome (MCR), based on data from approximately 6,000 individuals.

Previous research supports an association between conscientiousness and a lower risk of MCR, a form of predementia that involves slow gait speed and cognitive complaints, wrote Yannick Stephan, PhD, of the University of Montpellier (France), and colleagues. However, the specific facets of conscientiousness that impact MCR have not been examined.

Dr. Yannick Stephan

In a study published in the Journal of Psychiatric Research, the authors reviewed data from 6,001 dementia-free adults aged 65-99 years who were enrolled in the Health and Retirement Study, a nationally representative longitudinal study of adults aged 50 years and older in the United States.

Baseline data were collected between 2008 and 2010, and participants were assessed for MCR at follow-up points during 2012-2014 and 2016-2018. Six facets of conscientiousness were assessed using a 24-item scale that has been used in previous studies. The six facets were industriousness, self-control, order, traditionalism, virtue, and responsibility. The researchers controlled for variables including demographic factors, cognition, physical activity, disease burden, depressive symptoms, and body mass index.
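To make the adjustment strategy concrete, here is a minimal sketch of a covariate-adjusted logistic regression of concurrent MCR on a single facet score. It runs on synthetic data; the data frame, column names, and the covariate subset are hypothetical stand-ins for illustration, not the authors' actual variables or analysis code.

# Hypothetical sketch (Python): covariate-adjusted logistic model for
# concurrent MCR. The synthetic data and all column names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 6001  # matches the analytic sample size reported above
df = pd.DataFrame({
    "mcr": rng.integers(0, 2, n),              # concurrent MCR status (0/1)
    "industriousness": rng.normal(0, 1, n),    # standardized facet score
    "age": rng.uniform(65, 99, n),
    "bmi": rng.normal(27, 4, n),
    "depressive_symptoms": rng.poisson(1.5, n),
})

# Regress MCR status on one conscientiousness facet, adjusting for a
# subset of the covariates named in the article.
fit = smf.logit("mcr ~ industriousness + age + bmi + depressive_symptoms",
                data=df).fit(disp=False)

# Exponentiated coefficients are odds ratios per 1-unit facet increase,
# the same measure reported in the results below.
print(np.exp(fit.params))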

Overall, increased industriousness was significantly associated with a lower likelihood of concurrent MCR (odds ratio, 0.75) and a reduced risk of incident MCR (hazard ratio, 0.63; P < .001 for both).
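As a quick check on how those ratios map to the percentage in the opening sentence: the percent reduction implied by a ratio measure is 1 minus the ratio, so an odds ratio of 0.75 corresponds to 25% lower odds and a hazard ratio of 0.63 to a 37% lower hazard. A minimal worked example:

# Worked arithmetic linking the reported ratios to percent reductions.
# Note: an odds ratio approximates a risk ratio only when the outcome is
# uncommon, so "25% reduced risk" is shorthand for 25% lower odds.
odds_ratio_concurrent = 0.75
hazard_ratio_incident = 0.63

print(f"Concurrent MCR: {1 - odds_ratio_concurrent:.0%} lower odds")    # 25%
print(f"Incident MCR: {1 - hazard_ratio_incident:.0%} lower hazard")    # 37%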

The conscientiousness facets of order, self-control, and responsibility also were associated with a lower likelihood of both concurrent and incident MCR, with odds ratios ranging from 0.82 to 0.88 for concurrent MCR and hazard ratios ranging from 0.72 to 0.82 for incident MCR.

Traditionalism and virtue were significantly associated with a lower risk of incident MCR, but not concurrent MCR (HR, 0.84; P < .01 for both facets).

The association may operate through several cognitive, health-related, behavioral, and psychological pathways, the researchers wrote. For industriousness, the relationship was partly explained by cognition, physical activity, disease burden, BMI, and depressive symptoms. Industriousness also has been associated with lower systemic inflammation, which may in turn reduce MCR risk, and some data suggest that industriousness and MCR share a common genetic cause.

The study findings were limited by several factors, including the observational design and a positive selection effect: patients with complete follow-up data likely have higher levels of order, industriousness, and responsibility, the researchers noted. However, the results support those of previous studies and were strengthened by the large sample and the examination of six facets of conscientiousness.

“This study thus provides a more detailed understanding of the specific components of conscientiousness that are associated with risk of MCR among older adults,” and the facets could be targeted in interventions to reduce both MCR and dementia, they concluded.

The Health and Retirement Study is supported by the National Institute on Aging and conducted by the University of Michigan. The current study was supported in part by the National Institutes of Health. The researchers had no financial conflicts to disclose.
