The Journal of Clinical Outcomes Management® is an independent, peer-reviewed journal offering evidence-based, practical information for improving the quality, safety, and value of health care.

Breast cancer survival disparities: Need to address obesity

The poorer survival rates in Black women compared with White women with breast cancer could be explained, at least in part, by the much higher rates of obesity, obesity-related comorbidities, and overall comorbidities found in Black women, say researchers reporting a retrospective chart review from their clinic.

Kirsten A. Nyrop, PhD, deputy director for research for the Geriatric Oncology Program at the University of North Carolina, Chapel Hill, and colleagues analyzed medical charts for 144 Black women and 404 White women with early breast cancer.

Obesity rates were nearly double among Black women, at 62% vs 32%.

In addition, significantly more Black women had two or more total comorbidities (62% vs 47%), two or more obesity-related comorbidities (33% vs 10%), hypertension (60% vs 32%), type 2 diabetes (23% vs 6%), and hypercholesterolemia or hyperlipidemia (28% vs 18%).

These racial disparities persisted after adjustment for age and body mass index, both independent predictors of chronic diseases.

The differences in obesity and comorbidity did not translate into differences in treatment decisions such as surgery type, chemotherapy regimen, radiation, or endocrine treatment, the researchers note.

The findings were published online December 7 in the journal Cancer.

They underscore the likely role of obesity and related complications in the racial disparities seen in breast cancer survival outcomes, the authors conclude.

Experts writing in an accompanying editorial agree, while emphasizing the well-documented differences in survival.

Compared with White women with breast cancer, Black women with breast cancer have a 42% higher mortality rate and a greater likelihood of dying from breast cancer regardless of their age at diagnosis, note Caitlin E. Taylor, MD, and Jane Lowe Meisel, MD, of Emory University, Atlanta, Georgia.

“Although it is clear that the causes of such disparities are complex and multifactorial, questions remain regarding to what extent patient comorbidities, and specifically obesity, may be playing a role,” they write.

The obesity- and comorbidity-related disparities identified by Nyrop et al “undoubtedly contribute to the overall disparities noted in breast cancer outcomes among Black women,” they add.

Takeaway messages

Nyrop and colleagues say their findings have two important takeaway messages:

1) It is important to manage comorbidities in women with newly diagnosed early breast cancer to reduce the risk of mortality from competing causes, and

2) Excess weight at diagnosis and weight gain after primary treatment increase the risk of obesity-related disease that can affect survival, and patients should be educated about the association.

Such conversations should be “patient-centered and culturally appropriate,” they note.

In their editorial, Taylor and Meisel agree with the authors’ conclusions. “Given the higher prevalence of obesity and related comorbidities among Black patients, addressing these issues must become an integral part of early breast cancer management to prolong overall survival and continue improving outcomes for these women.”

In fact, addressing comorbidities early in breast cancer management is important for patients of all racial and ethnic backgrounds, the editorialists add, concluding that the role of obesity as “a critical player underlying breast cancer mortality” is increasingly clear, and that its management is crucial for improved outcomes and quality of life.

This study was supported by the Breast Cancer Research Foundation, the National Cancer Institute’s Breast Cancer Specialized Program of Research Excellence, and the University of North Carolina Lineberger Comprehensive Cancer Center/University Cancer Research Fund. The study authors have disclosed no relevant financial relationships. Meisel has received research support from Pfizer, Eli Lilly, and Seattle Genetics and has served as a paid advisor for Pfizer, PUMA, Novartis, and Clovis Oncology. Taylor has disclosed no relevant financial relationships.

This article first appeared on Medscape.com.

FDA approves liraglutide for adolescents with obesity

The Food and Drug Administration’s new indication for liraglutide (Saxenda) for weight loss in adolescents with obesity, announced on Dec. 4, was welcomed as a milestone for a field that has seen no new drug options since 2003, and it expanded by 50% the list of agents indicated for weight loss in this age group.

But liraglutide’s track record in adolescents in the key study published earlier in 2020 left some experts unconvinced that its modest effects would do much to blunt the expanding cohort of teens with obesity.

“Until now, we’ve had phentermine and orlistat with FDA approval” for adolescents with obesity, and phentermine’s label specifies only patients older than 16 years. “It’s important that the FDA deemed liraglutide’s benefits greater than its risks for adolescents,” said Aaron S. Kelly, PhD, leader of the 82-week, multicenter, randomized study of liraglutide in 251 adolescents with obesity that directly led to the FDA’s action.

“We have results from a strong, published randomized trial, and the green light from the FDA, and that should give clinicians reassurance and confidence to use liraglutide clinically,” said Dr. Kelly, professor of pediatrics and codirector of the Center for Pediatric Obesity Medicine at the University of Minnesota in Minneapolis.

An ‘unimpressive’ drop in BMI

Sonia Caprio, MD, had a more skeptical take on liraglutide’s role with its new indication: “Approval of higher-dose liraglutide is an improvement that reflects a willingness to accept adolescent obesity as a disease that needs treatment with pharmacological agents. However, the study, published in the New England Journal of Medicine, was not impressive in terms of weight loss, and more importantly liraglutide was not associated with any significant changes in metabolic markers” such as insulin resistance, high-sensitivity C-reactive protein, lipoproteins and triglycerides, and hemoglobin A1c.

The observed average 5% drop in body mass index seen after a year on liraglutide treatment, compared with baseline and relative to no average change from baseline in the placebo arm, was “totally insufficient, and will not diminish any of the metabolic complications in youth with obesity,” commented Dr. Caprio, an endocrinologist and professor of pediatrics at Yale University in New Haven, Conn.

Results from the study led by Dr. Kelly also showed that 56 weeks of liraglutide cut BMI by at least 5% in 43% of patients and by at least 10% in 26%, compared with respective rates of 19% and 8% in the placebo-control arm. He took a more expansive view of the potential benefits of weight loss of the caliber demonstrated in the study.

“In general, we wait too long with obesity in children; the earlier the intervention the better. A 3% or 4% reduction in BMI at 12 or 13 years old can pay big dividends down the road” when a typical adolescent trajectory of steadily rising weight can be flattened, he said in an interview.

Bariatric and metabolic surgery, although highly effective and usually safe, is seen by many clinicians, patients, and families as an “intervention of last resort,” and its very low level of uptake in adolescents bears witness to that reputation. It also creates an important niche for safe and effective drugs to fill as an adjunct to lifestyle changes, which are often ineffective when used by themselves. Liraglutide’s main mechanism for weight loss is depressing hunger, Dr. Kelly noted.

Existing meds have limitations

The existing medical treatments, orlistat and phentermine, both have significant drawbacks that limit their use. Orlistat (Xenical, Alli), FDA approved for adolescents 12-16 years old since 2003, limits intestinal fat absorption and as a result often produces unwanted GI effects. Phentermine’s approval for older adolescents dates from 1959 and rests on a weak evidence base; its label limits it to “short-term” use, generally taken to mean a maximum of 12 weeks. And, as a stimulant, phentermine has often been regarded as potentially dangerous, although Dr. Kelly noted that stimulants are well-accepted treatments for other disorders in children and adolescents.

“The earlier we treat obesity in youth, the better, given that it tends to track into adulthood,” agreed Dr. Caprio. “However, it remains to be seen whether weight reduction with a pharmacological agent is going to help prevent the intractable trajectories of weight and its complications. So far, it looks like surgery may be more efficacious,” she said in an interview.

Another near-term drawback of liraglutide will likely be its cost for many patients, more than $10,000 per year at full retail price for the weight-loss formulation, given that insurers have had a poor record of covering the drug for this indication in adults, both Dr. Caprio and Dr. Kelly noted.

Compliance with liraglutide is also important. Dr. Kelly’s study followed patients for their first 26 weeks off treatment after 56 weeks on the drug, and showed that on average weights rebounded to virtually baseline levels by 6 months after treatment stopped.

Obesity treatment lasts a lifetime

“Obesity is a chronic disease that requires chronic treatment, just like hypertension,” Dr. Kelly stressed, citing the weight rebound seen in his study after liraglutide was stopped as further proof of that concept. “All obesity treatment is lifelong,” he maintained.

He highlighted the importance of clinicians discussing with adolescent patients and their families the prospect of potentially remaining on liraglutide treatment for years to maintain weight loss. His experience with the randomized study convinced him that many adolescents with obesity are amenable to daily subcutaneous injection using the pen device that liraglutide comes in, but he acknowledged that some teens find this off-putting.

For the near term, Dr. Kelly foresaw liraglutide treatment of adolescents as something that will mostly be administered to patients who seek care at centers that specialize in obesity management. “I think we’ll eventually see it move to more primary care settings, but that will be down the road.”

The study of liraglutide in adolescents was sponsored by Novo Nordisk, the company that markets liraglutide (Saxenda). Dr. Kelly has been a consultant to Novo Nordisk and also to Orexigen Therapeutics, Vivus, and WW, and he has received research funding from AstraZeneca. Dr. Caprio had no disclosures.

Diabetes prevention diet may lower mortality risk in breast cancer

Women who more closely followed a diabetes risk-reduction diet both before and after a diagnosis of breast cancer had lower risks for breast cancer–specific and all-cause mortality when compared with women with less healthy diets or those who did not substantially modify what they ate following diagnosis, according to pooled data from two prospective cohort studies.

Among more than 8,000 participants in the Nurses’ Health Study and NHS II, those who most closely adhered to a dietary pattern associated with lower risk for type 2 diabetes had a 13% lower risk for breast cancer–specific mortality and a 31% lower risk for death from any cause, compared with those whose diets least matched that pattern, reported Tengteng Wang, PhD, of the Harvard School of Public Health, Boston, and colleagues.

“Promoting dietary changes consistent with prevention of type 2 diabetes may be very important for breast cancer survivors,” Dr. Wang said in an oral abstract presentation at the 2020 San Antonio Breast Cancer Symposium.

Poor outcomes

Type 2 diabetes has been shown to be associated with poor outcomes for women with breast cancer, prompting the investigators to see whether diet modification could play a role in improving prognosis.

They looked at self-reported dietary data from 8,320 women diagnosed with stage I-III breast cancer who were participants in NHS, with data from 1980 to 2014, and NHS II, with data from 1991 to 2015.

Every 2-4 years, participants filled out validated follow-up questionnaires, including information on diet.

The investigators calculated a diabetes risk-reduction diet (DRRD) adherence score based on nine components: higher intakes of cereal fiber, coffee, nuts, and whole fruits; a higher ratio of polyunsaturated to saturated fat; lower glycemic index; and lower intakes of trans fats, sugar-sweetened beverages and/or fruit juices, and red meat.
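
The paper’s exact scoring algorithm isn’t reproduced here, but dietary adherence scores of this kind are typically built by ranking each component into cohort quintiles and summing points, with “adverse” components reverse-scored. The following is a minimal, hypothetical Python sketch of that general approach; the component names, the quintile scheme, and the 9-45 point range are illustrative assumptions, not the authors’ method.

```python
# Hypothetical sketch of a DRRD-style adherence score (illustrative only;
# not the study's actual algorithm). Each of the nine components is assumed
# to be ranked into cohort quintiles (1-5). "Beneficial" components score
# their quintile directly; "adverse" components are reverse-scored.

BENEFICIAL = ["cereal_fiber", "coffee", "nuts", "whole_fruit", "pufa_sfa_ratio"]
ADVERSE = ["glycemic_index", "trans_fat", "ssb_fruit_juice", "red_meat"]

def drrd_score(quintiles: dict[str, int]) -> int:
    """Sum quintile points across the nine components.

    quintiles maps each component name to the participant's cohort
    quintile (1 = lowest intake/value, 5 = highest).
    """
    score = sum(quintiles[c] for c in BENEFICIAL)    # higher intake -> more points
    score += sum(6 - quintiles[c] for c in ADVERSE)  # lower intake -> more points
    return score

# Example: top quintile of every beneficial component and bottom quintile of
# every adverse component yields the maximum possible score of 45.
best = {c: 5 for c in BENEFICIAL} | {c: 1 for c in ADVERSE}
print(drrd_score(best))  # 45
```

Under this assumed scheme, scores range from 9 (least diabetes-protective diet) to 45 (most protective), and quintiles of the score would correspond to the adherence quintiles compared in the analysis.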

The investigators calculated cumulative average DRRD scores based on repeated measures of diet after breast cancer diagnosis. They obtained data on deaths from family reports or the National Death Index, and they determined causes of death from either death certificates or medical records.

At a median follow-up of 13 years, 2,146 participants had died, with 948 of the deaths attributed to breast cancer.

After adjusting for socioeconomic factors, postdiagnosis time-varying covariates, and key breast cancer clinical factors, the investigators found a nonsignificant trend toward lower risk for breast cancer–specific death among women in the highest versus lowest quintile of DRRD score (hazard ratio, 0.87; P = .13), but a significantly lower risk for all-cause mortality (HR, 0.69; P < .0001).

Looking at participants who changed their diet following breast cancer diagnosis, those who went from a low DRRD score prediagnosis to a high score post diagnosis had a 20% reduction in risk for breast cancer–specific mortality and a 14% reduction in risk for all-cause mortality, the investigators found (P values for this analysis were not shown).

There were no differences in results by either tumor estrogen receptor status or stage.

Dr. Wang acknowledged that the study was limited by the population (which was predominantly composed of educated, non-Hispanic White women), errors in dietary measurement, and limited power for estrogen receptor–negative tumor analysis.

Will patients do what’s good for them?

While this study adds to the body of evidence linking diet and cancer, putting the information into action is another story, according to Halle Moore, MD, of the Cleveland Clinic, who was not involved in this study.

“We have had supportive data for the role of diet in general health outcomes, including cancer-related outcomes, for a long time. But getting the public to implement these dietary changes is a challenge, so certainly the more convincing data that we have and the more specific we can be with specific types of dietary interventions, it does make it more helpful to counsel patients,” Dr. Moore said in an interview.

She said the finding that dietary change post diagnosis can have a significant effect on lowering both all-cause and breast cancer–specific mortality is compelling evidence for a role of diet in breast cancer outcomes.

In the question-and-answer session following Dr. Wang’s presentation, Hans-Christian Kolberg, MD, from Marienhospital Bottrop at the University of Duisburg-Essen (Germany), echoed the sentiment when he commented, “you have an important result that you did not mention in the conclusion: It is not too late to change diet after breast cancer diagnosis!”

This study was supported, in part, by grants from the National Cancer Institute, the Breast Cancer Research Foundation, and the Susan G. Komen Breast Cancer Foundation. Dr. Wang, Dr. Moore, and Dr. Kolberg reported no relevant conflicts of interest.

SOURCE: Wang T et al. SABCS 2020, Abstract GS2-09.

Can a health care worker refuse the COVID-19 vaccine?

As hospitals across the country develop their plans to vaccinate their health care employees against COVID-19, a key question has come to the fore: What if an employee – whether nurse, physician, or other health care worker – refuses to receive the vaccine? Can hospitals require their employees to be vaccinated against COVID-19? And what consequences could an employee face for refusing the vaccine?

My answer needs to be based, in part, on the law related to previous vaccines – influenza, for example – because at the time of this writing (early December 2020), no vaccine for COVID-19 has been approved, although approval of at least one vaccine is expected within a week. So there have been no vaccine offers or refusals yet, nor are there any cases to date involving an employee who refused a COVID-19 vaccine. As of December 2020, there are no state or federal laws that either require an employee to be vaccinated against COVID-19 or protect an employee who refuses vaccination against COVID-19. It will take a while after the vaccine is approved and distributed before refusals, reactions, policies, cases, and laws begin to emerge.

If we look at the law related to health care workers refusing to be vaccinated against the closest relative to COVID-19 – influenza – then the answer would be yes, employers can require employees to be vaccinated.

An employer can fire an employee who refuses influenza vaccination. If an employee who refused and was fired sues the employer for wrongful termination, the employee has more or less chance of success depending on the reason for refusal. Some courts and the Equal Employment Opportunity Commission have held that a refusal on religious grounds is protected by the U.S. Constitution, as in this recent case. The Constitution protects freedom to practice one’s religion. Specific religions may have a range of tenets that support refusal to be vaccinated.

A refusal on medical grounds has been successful if the medical grounds fall under the protections of the Americans with Disabilities Act but may fail when the medical grounds for the claim are not covered by the ADA.

Refusal for secular, nonmedical reasons, such as a health care worker’s policy of treating their body as their temple, has not gone over well with employers or courts. However, in at least one case, a nurse who refused vaccination on secular, nonmedical grounds won her case against her employer, on appeal. The appeals court found that the hospital violated her First Amendment rights.

Employees who refuse vaccination for religious or medical reasons still will need to take measures to protect patients and other employees from infection. An employer such as a hospital can, rather than fire the employee, offer the employee an accommodation, such as requiring that the employee wear a mask or quarantine. There are no cases that have upheld an employee’s right to refuse to wear a mask or quarantine.

The situation with the COVID-19 vaccine is different from the situation surrounding influenza vaccines. There are plenty of data on effectiveness and side effects of influenza vaccines, but there is very little evidence of short- or long-term effects of the COVID-19 vaccines currently being tested and/or considered for approval. One could argue that the process of vaccine development is the same for all virus vaccines. However, public confidence in the vaccine vetting process is not what it once was. It has been widely publicized that the COVID-19 vaccine trials have been rushed. As of December 2020, only 60% of the general population say they would take the vaccine, although researchers say confidence is increasing.

The Centers for Disease Control and Prevention has designated health care workers as first in line to get the vaccine, but some health care workers may not want to be the first to try it. A CDC survey found that 63% of health care workers polled in recent months said they would get a COVID-19 vaccine.

Unions have entered the conversation. A coalition of unions that represent health care workers said, “we need a transparent, evidence-based federal vaccine strategy based on principles of equity, safety, and priority, as well as robust efforts to address a high degree of skepticism about safety of an authorized vaccine.” The organization declined to promote a vaccine until more is known.

As of publication date, the EEOC guidance for employers responding to COVID-19 does not address vaccines.

The CDC’s Interim Guidance for Businesses and Employers Responding to Coronavirus Disease 2019, May 2020, updated Dec. 4, 2020, does not address vaccines. The CDC’s page on COVID-19 vaccination for health care workers does not address a health care worker’s refusal. The site does assure health care workers that the vaccine development process is sound: “The current vaccine safety system is strong and robust, with the capacity to effectively monitor COVID-19 vaccine safety. Existing data systems have validated analytic methods that can rapidly detect statistical signals for possible vaccine safety problems. These systems are being scaled up to fully meet the needs of the nation. Additional systems and data sources are also being developed to further enhance safety monitoring capabilities. CDC is committed to ensuring that COVID-19 vaccines are safe.”

In the coming months, government officials and vaccine manufacturers will be working to reassure the public of the safety of the vaccine and the rigor of the vaccine development process. In November 2020, National Institute of Allergy and Infectious Diseases Director Anthony Fauci, MD, told Kaiser Health News: “The company looks at the data. I look at the data. Then the company puts the data to the FDA. The FDA will make the decision to do an emergency-use authorization or a license application approval. And they have career scientists who are really independent. They’re not beholden to anybody. Then there’s another independent group, the Vaccines and Related Biological Products Advisory Committee. The FDA commissioner has vowed publicly that he will go according to the opinion of the career scientists and the advisory board.” President-elect Joe Biden said he would get a vaccine when Dr. Fauci thinks it is safe.

An employee who, after researching the vaccine and the process, still wants to refuse when offered the vaccine is not likely to be fired for that reason right away, as long as the employee takes other precautions, such as wearing a mask. If the employer does fire the employee and the employee sues the employer, it is impossible to predict how a court would decide the case.

Related legal questions may arise in the coming months. For example:

  • Is an employer exempt from paying workers’ compensation to an employee who refuses to be vaccinated and then contracts the virus while on the job?
  • Can a prospective employer require COVID-19 vaccination as a precondition of employment?
  • Is it within a patient’s rights to receive an answer to the question: Has my health care worker been vaccinated against COVID-19?
  • If a hospital allows employees to refuse vaccination and keep working, and an outbreak occurs, and it is suggested through contact tracing that unvaccinated workers infected patients, will a court hold the hospital liable for patients’ damages?

Answers to these questions are yet to be determined.

Carolyn Buppert (www.buppert.com) is an attorney and former nurse practitioner who focuses on the legal issues affecting nurse practitioners.

A version of this article originally appeared on Medscape.com.


Peripheral neuropathy tied to mortality in adults without diabetes


Peripheral neuropathy is common in U.S. adults and is associated with an increased risk of death, even in the absence of diabetes, researchers reported in Annals of Internal Medicine.


The findings do not necessarily mean that doctors should implement broader screening for peripheral neuropathy at this time, however, the investigators said.

“Doctors don’t typically screen for peripheral neuropathy in persons without diabetes,” senior author Elizabeth Selvin, PhD, MPH, professor of epidemiology at the Johns Hopkins Bloomberg School of Public Health, Baltimore, said in an interview.

“Our study shows that peripheral neuropathy – as assessed by decreased sensation in the feet – is common, even in people without diabetes,” Dr. Selvin explained. “It is not yet clear whether we should be screening people without diabetes since we don’t have clear treatments, but our study does suggest that this condition is an underrecognized condition that is associated with poor outcomes.”

Patients with diabetes typically undergo annual foot examinations that include screening for peripheral neuropathy, but that’s not the case for most adults in the absence of diabetes.

“I don’t know if we can make the jump that we should be screening people without diabetes,” said first author Caitlin W. Hicks, MD, assistant professor of surgery, division of vascular surgery and endovascular therapy, Johns Hopkins University, Baltimore. “Right now, we do not exactly know what it means in the people without diabetes, and we definitely do not know how to treat it. So, screening for it will tell us that this person has this and is at higher risk of mortality than someone who doesn’t, but we do not know what to do with that information yet.”

Nevertheless, the study raises the question of whether physicians should pay more attention to peripheral neuropathy in people without diabetes, said Dr. Hicks, director of research at the university’s diabetic foot and wound service.
 

Heightened risk

To examine associations between peripheral neuropathy and all-cause and cardiovascular mortality in U.S. adults, Dr. Hicks and colleagues analyzed data from 7,116 adults aged 40 years or older who participated in the National Health and Nutrition Examination Survey (NHANES) between 1999 and 2004.

The study included participants who underwent monofilament testing for peripheral neuropathy. During testing, technicians used a standard 5.07 Semmes-Weinstein nylon monofilament to apply slight pressure to the bottom of each foot at three sites. If participants could not correctly identify where pressure was applied, the test was repeated. After participants gave two incorrect or undeterminable responses for a site, the site was defined as insensate. The researchers defined peripheral neuropathy as at least one insensate site on either foot.
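To make that classification rule concrete, here is a minimal Python sketch of the insensate-site logic as described above; the data layout and function names are hypothetical illustrations, not the NHANES codebook.

```python
# Illustrative sketch of the insensate-site rule described above; the data
# layout and names are hypothetical, not the NHANES codebook.

def site_is_insensate(responses):
    """A site counts as insensate after two incorrect or undeterminable responses."""
    bad = sum(1 for r in responses if r in ("incorrect", "undeterminable"))
    return bad >= 2

def has_peripheral_neuropathy(feet):
    """Peripheral neuropathy: at least one insensate site on either foot."""
    return any(
        site_is_insensate(site_responses)
        for sites in feet.values()
        for site_responses in sites
    )

# Example: one left-foot site fails twice -> classified as peripheral neuropathy.
feet = {
    "left":  [["correct"], ["incorrect", "undeterminable"], ["correct"]],
    "right": [["correct"], ["correct"], ["correct"]],
}
print(has_peripheral_neuropathy(feet))  # True
```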

The researchers determined deaths and causes of death using death certificate records from the National Death Index through 2015.

In all, 13.5% of the participants had peripheral neuropathy, including 27% of adults with diabetes and 11.6% of adults without diabetes. Those with peripheral neuropathy were older, were more likely to be male, and had lower levels of education, compared with participants without peripheral neuropathy. They also had higher body mass index, were more often former or current smokers, and had a higher prevalence of hypertension, hypercholesterolemia, and cardiovascular disease.

During a median follow-up of 13 years, 2,128 participants died, including 488 who died of cardiovascular causes.

The incidence rate of all-cause mortality per 1,000 person-years was 57.6 in adults with diabetes and peripheral neuropathy, 34.3 in adults with peripheral neuropathy but no diabetes, 27.1 in adults with diabetes but no peripheral neuropathy, and 13.0 in adults without diabetes or peripheral neuropathy.
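For readers unfamiliar with the unit, a rate per 1,000 person-years is the number of events divided by the total years of follow-up contributed by the group, scaled by 1,000. A minimal sketch with invented inputs:

```python
# Incidence rate per 1,000 person-years: events / total follow-up x 1,000.
# The inputs below are invented; only the formula mirrors the article.

def rate_per_1000_person_years(events: int, person_years: float) -> float:
    return 1000 * events / person_years

# Hypothetical group: 120 deaths over 9,230 person-years of follow-up.
print(round(rate_per_1000_person_years(120, 9230), 1))  # 13.0
```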

Among participants with diabetes, the leading cause of death was cardiovascular disease (31% of deaths), whereas among participants without diabetes, the leading cause of death was malignant neoplasms (27% of deaths).

After adjustment for age, sex, race/ethnicity, and risk factors such as cardiovascular disease, peripheral neuropathy was significantly associated with all-cause mortality (hazard ratio [HR], 1.49) and cardiovascular mortality (HR, 1.66) in participants with diabetes. In participants without diabetes, peripheral neuropathy was significantly associated with all-cause mortality (HR, 1.31), but its association with cardiovascular mortality was not statistically significant.
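Adjusted hazard ratios of this kind typically come from a Cox proportional hazards model. As a sketch of the general approach only (simulated data, not the authors' analysis), here is how such a model can be fit with the lifelines library:

```python
# Minimal Cox proportional hazards sketch on simulated data; column names
# and values are invented and do not reflect the NHANES analysis dataset.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
neuropathy = rng.integers(0, 2, n)
age = rng.normal(60, 10, n)

# Simulated survival times with a higher hazard when neuropathy is present.
hazard = 0.02 * np.exp(0.4 * neuropathy + 0.03 * (age - 60))
time = rng.exponential(1.0 / hazard)
event = (time < 15).astype(int)      # administrative censoring at 15 years
time = np.minimum(time, 15.0)

df = pd.DataFrame({"time": time, "event": event,
                   "neuropathy": neuropathy, "age": age})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
# exp(coef) for `neuropathy` is the age-adjusted hazard ratio.
print(cph.hazard_ratios_["neuropathy"])   # recovers roughly exp(0.4) ~ 1.5
```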

The association between peripheral neuropathy and all-cause mortality persisted in a sensitivity analysis that focused on adults with normoglycemia.

Related conditions

The study confirms findings from prior studies that examined the prevalence of loss of peripheral sensation in populations of older adults with and without diabetes, said Elsa S. Strotmeyer, PhD, MPH, associate professor of epidemiology at the University of Pittsburgh. “The clinical significance of the loss of peripheral sensation in older adults without diabetes is not fully appreciated,” she said.

A limitation of the study is that peripheral neuropathy was not a clinical diagnosis. “Monofilament testing at the foot is a quick clinical screen for decreased lower-extremity sensation that likely is a result of sensory peripheral nerve decline,” Dr. Strotmeyer said.

Another limitation is that death certificates are less accurate than medical records for determining cause of death.

“Past studies have indicated that peripheral nerve decline is related to common conditions in aging such as the metabolic syndrome and cardiovascular disease, cancer treatment, and physical function loss,” Dr. Strotmeyer said. “Therefore it is not surprising that it is related to mortality, as these conditions in aging are associated with increased mortality. Loss of peripheral sensation at the foot may also be related to fall injuries, and mortality from fall injuries has increased dramatically in older adults over the past several decades.”

Prior research has suggested that monofilament testing may play a role in screening for fall risk in older adults without diabetes, Dr. Strotmeyer added.

“For older adults both with and without diabetes, past studies have recommended monofilament testing be incorporated in geriatric screening for fall risk. Therefore, this article expands implications of clinical importance to understanding the pathology and consequences of loss of sensation at the foot in older patients,” she said.

The study was funded by the National Institute of Diabetes and Digestive and Kidney Diseases and the National Heart, Lung, and Blood Institute. Dr. Hicks, Dr. Selvin, and a coauthor, Kunihiro Matsushita, MD, PhD, disclosed NIH grants. In addition, Dr. Selvin disclosed personal fees from Novo Nordisk and grants from the Foundation for the National Institutes of Health outside the submitted work, and Dr. Matsushita disclosed grants and personal fees from Fukuda Denshi outside the submitted work. Dr. Strotmeyer receives funding from the National Institute on Aging and the National Institute of Arthritis and Musculoskeletal and Skin Diseases and is chair of the health sciences section of the Gerontological Society of America.

A version of this article originally appeared on Medscape.com.


Calcium burden drives CV risk whether coronary disease is obstructive or not


Coronary artery calcium (CAC) score as a measure of plaque burden more reliably predicts future cardiovascular (CV) risk in patients with suspected coronary artery disease (CAD) than whether or not the disease is obstructive, a large retrospective study suggests.

Indeed, CV risk went up in tandem with growing plaque burden regardless of whether there was obstructive disease in any coronary artery, defined as a 50% or greater stenosis by computed tomographic angiography (CTA).

The findings argue for plaque burden as measured by CAC score, rather than percent-stenosis severity, for guiding further treatment decisions in such patients, researchers say.

The research was based on more than 20,000 symptomatic patients referred for diagnostic CTA in the Western Denmark Heart Registry who were then followed for about 4 years for major CV events, including death, myocardial infarction, or stroke.

“What we show is that CAC is important for prognosis, and that patients with no stenosis have similar high risk as patients with stenosis when CAC burden is similar,” Martin Bødtker Mortensen, MD, PhD, Aarhus (Denmark) University Hospital, said in an interview.

The guidelines “distinguish between primary and secondary prevention patients” based on the presence or absence of obstructive CAD, he said, but “our results challenge this long-held approach. We show that patients with nonobstructive CAD carry similar risk as patients with obstructive CAD.”

In practice, risk tends to be greater in patients with obstructive compared with nonobstructive CAD. But the reason “is simply that they normally have higher atherosclerosis burden,” Dr. Mortensen said. “When you stratify based on atherosclerosis burden, then patients with obstructive and nonobstructive CAD have similar risk.”

The analysis was published online Dec. 7 in the Journal of the American College of Cardiology with Mortensen as lead author.

Until recently, it had long been believed that CV-event risk was driven by ischemia – but “ischemia is just a surrogate for the extent of atherosclerotic disease,” Armin Arbab Zadeh, MD, PhD, MPH, who is not connected with the current study, said in an interview.

The finding that CV risk climbs with growing coronary plaque burden “essentially confirms” other recent studies, but with “added value in showing how well the calcium scores, compared to obstructive disease, track with risk. So it’s definitely a nice extension of the evidence,” said Dr. Zadeh, director of cardiac CT at Johns Hopkins University, Baltimore.

“This study clearly shows that there is no ischemia ‘threshold,’ that the risk starts from mild and goes up with the burden of atherosclerotic disease. We were essentially taught wrong for decades.”

Dr. Mortensen said that the new results “are in line with previous studies showing that atherosclerosis burden is very important for risk.” They also help explain why revascularization of patients with stable angina failed to cut the risk of MI or death in trials like COURAGE, FAME-2, and ISCHEMIA. It’s because “stenosis per se explains little of the risk compared to atherosclerosis burden.”

In the current analysis, for example, about 65% of events occurred in patients who did not show obstructive CAD at CTA. The analysis included 23,759 patients with symptoms suggestive of CAD who were referred for CTA from 2008 through 2017; 5,043 (21.2%) were found to have obstructive disease, and 18,716 (78.8%) had either no CAD or nonobstructive disease.

About 4.4% of patients experienced a first major CV event over a median follow-up of 4.3 years. Only events occurring later than 90 days after CTA were counted in an effort to exclude any directly related to revascularization, Dr. Mortensen noted.

The risk of events went up proportionally with both CAC score and the number of coronaries with obstructive disease.

The number of major CV events per 1,000 person-years was 6.2 for patients with a CAC score of 0, of whom 87% had no CAD by CTA, 7% had nonobstructive CAD, and 6% had obstructive CAD.

The corresponding rate was 17.5 among patients with a CAC score of 100-399, for a hazard ratio (HR) of 1.7 (95% confidence interval [CI], 1.4-2.1) vs. a CAC score of 0.

And it was 42.3 per 1,000 person-years among patients with a CAC score >1,000, HR 3.4 (95% CI, 2.5-4.6) vs. a CAC score of 0. Among those with the highest-tier CAC score, none were without CAD by CTA, 17% had nonobstructive disease, and 83% had obstructive CAD.
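As an illustration of how such stratified rates are tabulated, here is a short pandas sketch; the records are invented, and only the tier boundaries match the study's.

```python
# Event rates per 1,000 person-years within CAC tiers; toy records only.
import pandas as pd

df = pd.DataFrame({
    "cac":          [0, 50, 250, 700, 1500, 0, 120, 2000],
    "event":        [0, 1, 0, 1, 1, 0, 0, 1],
    "person_years": [4.3, 4.0, 4.5, 3.9, 2.8, 4.4, 4.1, 3.2],
})

# Bucket each patient into the study's CAC tiers.
bins = [-1, 0, 99, 399, 1000, float("inf")]
labels = ["0", "1-99", "100-399", "400-1,000", ">1,000"]
df["cac_tier"] = pd.cut(df["cac"], bins=bins, labels=labels)

grouped = df.groupby("cac_tier", observed=True)
rates = 1000 * grouped["event"].sum() / grouped["person_years"].sum()
print(rates)   # events per 1,000 person-years within each CAC tier
```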

The major CV event rate rose similarly by number of coronaries with obstructive disease. It was 6.1 per 1,000 person-years in patients with no CAD. But it was 12.3 in those with nonobstructive disease, HR 1.3 (95% CI 1.1-1.6), up to 34.7 in those with triple-vessel obstructive disease, HR 2.9 (95% CI 2.2-3.9), vs. no CAD.

However, in an analysis with stratification by CAC score tier (0, 1-99, 100-399, 400-1,000, and >1,000), obstructive CAD was not associated with increased major CV-event risk in any stratum. The findings were similar in each subgroup with 1-vessel, 2-vessel, or 3-vessel CAD when stratified by CAC score.

Nor did major CV event risk track with obstructive CAD in analyses by age or after excluding all patients who underwent coronary revascularization within 90 days of CTA, the group reported.

“I believe these results support the use of CTA as a first-line test in patients with symptoms suggestive of CAD, as it provides valuable information for both diagnosis and prognosis in symptomatic patients,” Dr. Mortensen said. Those found to have a higher burden of atherosclerosis, he added, should receive aggressive preventive therapy regardless of whether or not they have obstructive disease.

The evidence from this study and others “supports a CTA-based approach” in such patients, Dr. Zadeh said. “And I would go further to say that a stress test is really inadequate,” in that it “detects the disease at such a late stage, you’re missing the opportunity to identify these patients who have atherosclerotic disease while you can do something about it.”

Its continued use as a first-line test, Dr. Zadeh said, “is essentially, in my mind, dismissing the evidence.”

In an accompanying editorial, Todd C. Villines, MD, and Patricia Rodriguez Lozano, MD, of the University of Virginia, Charlottesville, agreed that “it is time that the traditional definitions of primary and secondary prevention evolve to incorporate CAC and CTA measures of patient risk based on coronary artery plaque burden.”

But they pointed out some limitations of the current study.

“The authors compared CAC with ≥50% stenosis, not CAC to comprehensive, contemporary coronary CTA,” and so “did not assess numerous other well-validated measures of coronary plaque burden that are routinely obtained from coronary CTA that typically improve the prognostic accuracy of coronary CTA beyond stenosis alone.” Also not performed was “plaque quantification on coronary CTA, an emerging field of study.”

The editorialists noted that noncontrast CT as used in the study for CAC scoring “is generally not recommended as a standalone test in symptomatic patients. Most studies have shown that coronary CTA, a test that accurately detects stenosis and identifies all types of coronary atherosclerosis (calcified and noncalcified), has significantly higher diagnostic and prognostic accuracy than CAC when performed in symptomatic patients without known coronary artery disease.”

Dr. Mortensen has disclosed no relevant financial relationships. Disclosures for the other authors are in the report. Dr. Villines and Dr. Rodriguez Lozano have disclosed no relevant financial relationships. Dr. Zadeh disclosed receiving grant support from Canon Medical Systems.

A version of this article originally appeared on Medscape.com.

Publications
Topics
Sections

Coronary artery calcium (CAC) score as a measure of plaque burden more reliably predicts future cardiovascular (CV) risk in patients with suspected coronary disease (CAD) than whether or not the disease is obstructive, a large retrospective study suggests.

Indeed, CV risk went up in tandem with growing plaque burden regardless of whether there was obstructive disease in any coronary artery, defined as a 50% or greater stenosis by computed tomographic angiography (CTA).

The findings argue for plaque burden as measured by CAC score, rather than percent-stenosis severity, for guiding further treatment decisions in such patients, researchers say.

The research was based on more than 20,000 symptomatic patients referred to diagnostic CTA in the Western Denmark Heart Registry who were then followed for about 4 years for major CV events, including death, myocardial infarction, or stroke.

“What we show is that CAC is important for prognosis, and that patients with no stenosis have similar high risk as patients with stenosis when CAC burden is similar,” Martin Bødtker Mortensen, MD, PhD, Aarhus (Denmark) University Hospital, said in an interview.

The guidelines “distinguish between primary and secondary prevention patients” based on the presence or absence of obstructive CAD, he said, but “our results challenge this long-held approach. We show that patients with nonobstructive CAD carry similar risk as patients with obstructive CAD.”

In practice, risk tends to be greater in patients with obstructive compared with nonobstructive CAD. But the reason “is simply that they normally have higher atherosclerosis burden,” Dr. Mortensen said. “When you stratify based on atherosclerosis burden, then patients with obstructive and nonobstructive CAD have similar risk.”

The analysis was published online Dec. 7 in the Journal of the American College of Cardiology with Mortensen as lead author.

Until recently, it had long been believed that CV-event risk was driven by ischemia – but “ischemia is just a surrogate for the extent of atherosclerotic disease,” Armin Arbab Zadeh, MD, PhD, MPH, who is not connected with the current study, said in an interview.

The finding that CV risk climbs with growing coronary plaque burden “essentially confirms” other recent studies, but with “added value in showing how well the calcium scores, compared to obstructive disease, track with risk. So it’s definitely a nice extension of the evidence,” said Dr. Zadeh, director of cardiac CT at Johns Hopkins University, Baltimore.

“This study clearly shows that there is no ischemia ‘threshold,’ that the risk starts from mild and goes up with the burden of atherosclerotic disease. We were essentially taught wrong for decades.”

Dr. Mortensen said that the new results “are in line with previous studies showing that atherosclerosis burden is very important for risk.” They also help explain why revascularization of patients with stable angina failed to cut the risk of MI or death in trials like COURAGEFAME-2, and ISCHEMIA. It’s because “stenosis per se explains little of the risk compared to atherosclerosis burden.”

In the current analysis, for example, about 65% of events were in patients who did not show obstructive CAD at CTA. Its 23,759 patients with symptoms suggestive of CAD were referred for CTA from 2008 through 2017; 5,043 (21.2%) were found to have obstructive disease and 18,716 (78.8%) either had no CAD or nonobstructive disease.

About 4.4% of patients experienced a first major CV event over a median follow-up of 4.3 years. Only events occurring later than 90 days after CTA were counted in an effort to exclude any directly related to revascularization, Dr. Mortensen noted.

The risk of events went up proportionally with both CAC score and the number of coronaries with obstructive disease.

The number of major CV events per 1,000 person-years was 6.2 for patients with a CAC score of 0, of whom 87% had no CAD by CTA, 7% had nonobstructive CAD, and 6% had obstructive CAD.

The corresponding rate was 17.5 among patients with a CAC score >100-399 for a hazard ratio (HR) of 1.7 (95% confidence interval [CI] 1.4-2.1) vs. a CAC score of 0.

And it was 42.3 per 1,000 patient-years among patients with CAC score >1000, HR 3.4 (95% CI, 2.5-4.6) vs. a CAC score of 0. Among those with the highest-tier CAC score, none were without CAD by CTA, 17% had nonobstructive disease, and 83% had obstructive CAD.

The major CV event rate rose similarly by number of coronaries with obstructive disease. It was 6.1 per 1,000 person-years in patients with no CAD. But it was 12.3 in those with nonobstructive disease, HR 1.3 (95% CI 1.1-1.6), up to 34.7 in those with triple-vessel obstructive disease, HR 2.9 (95% CI 2.2-3.9), vs. no CAD.

However, in an analysis with stratification by CAC score tier (0, 1-99, 100-399, 400-1,000, and >1,000), obstructive CAD was not associated with increased major CV-event risk in any stratum. The findings were similar in each subgroup with 1-vessel, 2-vessel, or 3-vessel CAD when stratified by CAC score.

Nor did major CV event risk track with obstructive CAD in analyses by age or after excluding all patients who underwent coronary revascularization within 90 days of CTA, the group reported.

“I believe these results support the use of CTA as a first-line test in patients with symptoms suggestive of CAD, as it provides valuable information for both diagnosis and prognosis in symptomatic patients,” Dr. Mortensen said. Those found to have a higher burden of atherosclerosis, he added, should receive aggressive preventive therapy regardless of whether or not they have obstructive disease.

The evidence from this study and others “supports a CTA-based approach” in such patients, Dr. Zadeh said. “And I would go further to say that a stress test is really inadequate,” in that it “detects the disease at such a late stage, you’re missing the opportunity to identify these patients who have atherosclerotic disease while you can do something about it.”

Its continued use as a first-line test, Dr. Zadeh said, “is essentially, in my mind, dismissing the evidence.”

An accompanying editorial Todd C. Villines, MD, and Patricia Rodriguez Lozano, MD, of the University of Virginia, Charlottesville agreed that “it is time that the traditional definitions of primary and secondary prevention evolve to incorporate CAC and CTA measures of patient risk based on coronary artery plaque burden.”

But they pointed out some limitations of the current study.

“The authors compared CAC with ≥50% stenosis, not CAC to comprehensive, contemporary coronary CTA,” and so “did not assess numerous other well-validated measures of coronary plaque burden that are routinely obtained from coronary CTA that typically improve the prognostic accuracy of coronary CTA beyond stenosis alone.” Also not performed was “plaque quantification on coronary CTA, an emerging field of study.”

The editorialists noted that noncontrast CT as used in the study for CAC scoring “is generally not recommended as a standalone test in symptomatic patients. Most studies have shown that coronary CTA, a test that accurately detects stenosis and identifies all types of coronary atherosclerosis (calcified and noncalcified), has significantly higher diagnostic and prognostic accuracy than CAC when performed in symptomatic patients without known coronary artery disease.”

Dr. Mortensen has disclosed no relevant financial relationships. Disclosures for the other authors are in the report. Dr. Villines and Dr. Rodriguez Lozano have disclosed no relevant financial relationships. Dr. Zadeh disclosed receiving grant support from Canon Medical Systems.

A version of this article originally appeared on Medscape.com.

Coronary artery calcium (CAC) score as a measure of plaque burden more reliably predicts future cardiovascular (CV) risk in patients with suspected coronary disease (CAD) than whether or not the disease is obstructive, a large retrospective study suggests.

Indeed, CV risk went up in tandem with growing plaque burden regardless of whether there was obstructive disease in any coronary artery, defined as a 50% or greater stenosis by computed tomographic angiography (CTA).

The findings argue for plaque burden as measured by CAC score, rather than percent-stenosis severity, for guiding further treatment decisions in such patients, researchers say.

The research was based on more than 20,000 symptomatic patients referred to diagnostic CTA in the Western Denmark Heart Registry who were then followed for about 4 years for major CV events, including death, myocardial infarction, or stroke.

“What we show is that CAC is important for prognosis, and that patients with no stenosis have similar high risk as patients with stenosis when CAC burden is similar,” Martin Bødtker Mortensen, MD, PhD, Aarhus (Denmark) University Hospital, said in an interview.

The guidelines “distinguish between primary and secondary prevention patients” based on the presence or absence of obstructive CAD, he said, but “our results challenge this long-held approach. We show that patients with nonobstructive CAD carry similar risk as patients with obstructive CAD.”

In practice, risk tends to be greater in patients with obstructive compared with nonobstructive CAD. But the reason “is simply that they normally have higher atherosclerosis burden,” Dr. Mortensen said. “When you stratify based on atherosclerosis burden, then patients with obstructive and nonobstructive CAD have similar risk.”

The analysis was published online Dec. 7 in the Journal of the American College of Cardiology with Mortensen as lead author.

Until recently, it had long been believed that CV-event risk was driven by ischemia – but “ischemia is just a surrogate for the extent of atherosclerotic disease,” Armin Arbab Zadeh, MD, PhD, MPH, who is not connected with the current study, said in an interview.

The finding that CV risk climbs with growing coronary plaque burden “essentially confirms” other recent studies, but with “added value in showing how well the calcium scores, compared to obstructive disease, track with risk. So it’s definitely a nice extension of the evidence,” said Dr. Zadeh, director of cardiac CT at Johns Hopkins University, Baltimore.

“This study clearly shows that there is no ischemia ‘threshold,’ that the risk starts from mild and goes up with the burden of atherosclerotic disease. We were essentially taught wrong for decades.”

Dr. Mortensen said that the new results “are in line with previous studies showing that atherosclerosis burden is very important for risk.” They also help explain why revascularization of patients with stable angina failed to cut the risk of MI or death in trials like COURAGE, FAME-2, and ISCHEMIA. It’s because “stenosis per se explains little of the risk compared to atherosclerosis burden.”

In the current analysis, for example, about 65% of events occurred in patients who did not show obstructive CAD at CTA. The analysis included 23,759 patients with symptoms suggestive of CAD who were referred for CTA from 2008 through 2017; 5,043 (21.2%) were found to have obstructive disease, and 18,716 (78.8%) had either no CAD or nonobstructive disease.

About 4.4% of patients experienced a first major CV event over a median follow-up of 4.3 years. Only events occurring later than 90 days after CTA were counted in an effort to exclude any directly related to revascularization, Dr. Mortensen noted.

The risk of events went up proportionally with both CAC score and the number of coronaries with obstructive disease.

The number of major CV events per 1,000 person-years was 6.2 for patients with a CAC score of 0, of whom 87% had no CAD by CTA, 7% had nonobstructive CAD, and 6% had obstructive CAD.

The corresponding rate was 17.5 among patients with a CAC score of 100-399, for a hazard ratio (HR) of 1.7 (95% confidence interval [CI], 1.4-2.1) vs. a CAC score of 0.

And it was 42.3 per 1,000 person-years among patients with a CAC score >1,000, HR 3.4 (95% CI, 2.5-4.6) vs. a CAC score of 0. Among those with the highest-tier CAC score, none were without CAD by CTA, 17% had nonobstructive disease, and 83% had obstructive CAD.

The major CV event rate rose similarly with the number of coronaries with obstructive disease: it was 6.1 per 1,000 person-years in patients with no CAD, 12.3 in those with nonobstructive disease (HR, 1.3; 95% CI, 1.1-1.6), and up to 34.7 in those with triple-vessel obstructive disease (HR, 2.9; 95% CI, 2.2-3.9), all vs. no CAD.
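
The arithmetic behind such figures is simple enough to verify by hand. The sketch below uses invented event counts and follow-up totals, chosen only to reproduce rates of the magnitude reported above rather than the registry’s actual data, to show how an incidence rate per 1,000 person-years is derived; note that the published hazard ratios come from adjusted Cox proportional-hazards models, not from the crude rate ratio computed here.

```python
# Sketch: incidence rates per 1,000 person-years (illustrative numbers only,
# not data from the Western Denmark Heart Registry).

def rate_per_1000_person_years(events: int, person_years: float) -> float:
    """Observed events divided by total follow-up, scaled to 1,000 person-years."""
    return 1000 * events / person_years

# Hypothetical groups: a reference group and a high-CAC group.
ref_rate = rate_per_1000_person_years(events=120, person_years=19_400)   # ~6.2
high_rate = rate_per_1000_person_years(events=85, person_years=2_010)    # ~42.3

# Crude rate ratio; an adjusted hazard ratio from a Cox model will differ.
print(f"{ref_rate:.1f}, {high_rate:.1f}, ratio {high_rate / ref_rate:.1f}")
```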

However, in an analysis with stratification by CAC score tier (0, 1-99, 100-399, 400-1,000, and >1,000), obstructive CAD was not associated with increased major CV-event risk in any stratum. The findings were similar in each subgroup with 1-vessel, 2-vessel, or 3-vessel CAD when stratified by CAC score.

Nor did major CV event risk track with obstructive CAD in analyses by age or after excluding all patients who underwent coronary revascularization within 90 days of CTA, the group reported.

“I believe these results support the use of CTA as a first-line test in patients with symptoms suggestive of CAD, as it provides valuable information for both diagnosis and prognosis in symptomatic patients,” Dr. Mortensen said. Those found to have a higher burden of atherosclerosis, he added, should receive aggressive preventive therapy regardless of whether or not they have obstructive disease.

The evidence from this study and others “supports a CTA-based approach” in such patients, Dr. Zadeh said. “And I would go further to say that a stress test is really inadequate,” in that it “detects the disease at such a late stage, you’re missing the opportunity to identify these patients who have atherosclerotic disease while you can do something about it.”

Its continued use as a first-line test, Dr. Zadeh said, “is essentially, in my mind, dismissing the evidence.”

In an accompanying editorial, Todd C. Villines, MD, and Patricia Rodriguez Lozano, MD, of the University of Virginia, Charlottesville, agreed that “it is time that the traditional definitions of primary and secondary prevention evolve to incorporate CAC and CTA measures of patient risk based on coronary artery plaque burden.”

But they pointed out some limitations of the current study.

“The authors compared CAC with ≥50% stenosis, not CAC to comprehensive, contemporary coronary CTA,” and so “did not assess numerous other well-validated measures of coronary plaque burden that are routinely obtained from coronary CTA that typically improve the prognostic accuracy of coronary CTA beyond stenosis alone.” Also not performed was “plaque quantification on coronary CTA, an emerging field of study.”

The editorialists noted that noncontrast CT as used in the study for CAC scoring “is generally not recommended as a standalone test in symptomatic patients. Most studies have shown that coronary CTA, a test that accurately detects stenosis and identifies all types of coronary atherosclerosis (calcified and noncalcified), has significantly higher diagnostic and prognostic accuracy than CAC when performed in symptomatic patients without known coronary artery disease.”

Dr. Mortensen has disclosed no relevant financial relationships. Disclosures for the other authors are in the report. Dr. Villines and Dr. Rodriguez Lozano have disclosed no relevant financial relationships. Dr. Zadeh disclosed receiving grant support from Canon Medical Systems.

A version of this article originally appeared on Medscape.com.

Black race linked to poorer survival in AML

Black race is the most important risk factor for patients with acute myeloid leukemia (AML) and is associated with poor survival, according to new findings.

Among patients with AML younger than 60 years, the rate of overall 3-year survival was significantly less among Black patients than White patients (34% vs. 43%). The risk for death was 27% higher for Black patients compared with White patients.
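
These two figures hang together arithmetically: if the reported 27% higher risk for death reflects a hazard ratio of 1.27, then under a proportional-hazards assumption (an assumption of this illustration, not stated in the report) the White patients’ 43% 3-year survival implies roughly the 34% observed in Black patients. A minimal check:

```python
# Under proportional hazards, survival curves relate as S1(t) = S0(t) ** HR.
# Assumes the reported "27% higher risk" corresponds to a hazard ratio of 1.27.
hr = 1.27
s_white_3yr = 0.43   # 3-year overall survival among White patients

s_black_3yr = s_white_3yr ** hr
print(f"Implied 3-year survival among Black patients: {s_black_3yr:.2f}")  # ~0.34
```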

“Our study demonstrates the delicate interplay between a variety of factors that influence survival disparities, particularly for younger Black AML patients,” said first author Bhavana Bhatnagar, DO, of the Ohio State University’s Comprehensive Cancer Center, Columbus. “We were able to confirm the impact of socioeconomic factors while also demonstrating that being Black is, in and of itself, an independent poor prognostic variable for survival.”

She noted that the persistently poor outcomes of young Black patients that were seen despite similar treatments in clinical trials strongly suggest that additional factors have a bearing on their survival.

The findings of the study were presented during the plenary session of the annual meeting of the American Society of Hematology, which was held online this year. The study was simultaneously published in Cancer Discovery.

Racial disparities in cancer outcomes remain a challenge. The term “health disparities” describes differences in health outcomes among different groups, said Chancellor Donald, MD, of Tulane University, New Orleans, who introduced the study at the meeting. “Racial health disparities usually result from an unequal distribution of power and resources, not genetics.

“The examination of health disparities is certainly a worthwhile endeavor,” he continued. “For generations, differences in key health outcomes have negatively impacted the quality of life and shortened the life span of countless individuals. As scientists, clinicians, and invested members of our shared society, we are obligated to obtain a profound understanding of the mechanisms and impact of this morbid reality.”

Black race a risk factor

For their study, Dr. Bhatnagar and colleagues conducted a nationwide population analysis using data from the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute to identify 11,190 adults aged 18-60 years who were diagnosed with AML between 1986 and 2015.

To characterize molecular features, they conducted targeted sequencing of 81 genes in 1,339 patients with AML who were treated on frontline Cancer and Leukemia Group B/Alliance for Clinical Trials in Oncology (Alliance) protocols based on standard-intensity cytarabine/anthracycline induction followed by consolidation between 1986 and 2016. None of these patients received an allogeneic stem cell transplant when they achieved complete remission.

Although overall survival has improved during the past 3 decades, survival disparities between Black and White patients have widened over time (P < .001). The authors found a difference in survival that was not statistically significant between 1986 and 1995 (White patients, n = 1,365; Black patients, n = 160; P = .19). However, the difference was significant between 1996 and 2005 (White patients, n = 2,994; Black patients, n = 480; P = .004). “And it became even more noticeable in the most recent decade,” said Dr. Bhatnagar. “Furthermore, younger Black AML patients were found to have worse survival compared with younger White AML patients.”

Results from the second analysis of patients treated on Alliance protocols did not show any significant differences in early death rates (10% vs. 46%; P = .02) or complete remission rates (71% vs. 71%; P = 1.00). “While relapse rates were slightly higher in Black compared to White patients, this difference did not reach statistical significance,” said Dr. Bhatnagar. “There was also no significant difference in the number of cycles of consolidation chemotherapy administered to these patients.”

However, both disease-free and overall survival were significantly worse for Black patients, suggesting that factors other than treatment selection were likely at play in influencing the survival disparity. The median disease-free survival for Black patients was 0.8 years, vs. 1.4 years for White patients (P = .02). Overall survival was 1.2 years vs. 1.8 years (P = .02).

Relapse rates were slightly higher in Black patients than in White patients, at 71% vs. 59%, but this difference did not reach statistical significance (P = .14).

Differences in biomarkers

With regard to underlying molecular differences between Black and White patients, the investigators found that the most commonly mutated genes were NPM1, FLT3-ITD, and DNMT3A, each detected in more than 20% of Black patients. Other commonly mutated genes were IDH2, NRAS, TET2, IDH1, and TP53, which were mutated in more than 10% of patients. “All of these genes are established commonly mutated genes in AML,” said Dr. Bhatnagar.

On univariable and multivariable outcome analyses, which were used to identify clinical or molecular features that had a bearing on outcome, FLT3-ITD and IDH2 mutations were the only mutations associated with a higher risk for death among Black patients.

“This is actually a very important finding, as both FLT3 and IDH2 are now targetable with small-molecule inhibitors,” said Dr. Bhatnagar. “In addition, it is also worth noting that other gene mutations that have known prognostic significance in AML, such as NPM1, as well as RUNX1 and TP53, did not remain in the final statistical model.

“Importantly, our study provides powerful evidence that suggests differences in underlying disease biology between young Black and White AML patients, as evidenced by differences in the frequencies of recurrent gene mutations,” she said.

Understudied disparities

Although the study showed that Black patients had worse outcomes, “surprisingly, the authors found these outcomes hold even when the patients are participating in clinical trials,” noted Elisa Weiss, PhD, senior vice president of education, services, and health research for the Leukemia and Lymphoma Society.

“The study makes clear that the medical and science community need to do more to better understand the social, economic, environmental, and biological causes of these disparities,” she said in an interview. “In fact, the findings suggest that there are myriad complex and understudied causes of the identified disparities, and they are likely to lie at the intersection of all levels of the social ecology that impact an individual’s ability to access timely and unbiased care, maintain their mental and physical health, and receive needed social support and resources.”

She noted that the Leukemia and Lymphoma Society has an Equity in Access research program that aims to “advance study of underlying causes of inequitable access to care and identify policies, strategies, and interventions that have the potential to reduce inequities and increase access to health care, services, and programs for blood cancer patients and survivors.”

The research was supported in part by the National Cancer Institute of the National Institutes of Health, other institutions, and through several scholar awards. Dr. Bhatnagar has received advisory board honoraria from Novartis, Kite Pharma, Celgene, Astellas, and Cell Therapeutics. Dr. Weiss has disclosed no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


To D or not to D? Vitamin D doesn’t reduce falls in older adults

Higher doses of vitamin D supplementation not only show no benefit in the prevention of falls in older adults at increased risk of falling, compared with the lowest doses, but they appear to increase the risk, new research shows.

Based on the findings, supplemental vitamin D above the minimum dose of 200 IU/day likely has little benefit, lead author Lawrence J. Appel, MD, MPH, told this news organization.

“In the absence of any benefit of 1,000 IU/day versus 2,000 IU/day [of vitamin D supplementation] on falls, along with the potential for harm from doses above 1,000 IU/day, it is hard to recommend a dose above 200 IU/day in older-aged persons, unless there is a compelling reason,” asserted Dr. Appel, director of the Welch Center for Prevention, Epidemiology, and Clinical Research at Johns Hopkins Bloomberg School of Public Health in Baltimore.

“More is not always better – and it may even be worse,” when it comes to vitamin D’s role in the prevention of falls, he said.

The research, published in Annals of Internal Medicine, adds important evidence in the ongoing struggle to prevent falls, says Bruce R. Troen, MD, in an accompanying editorial.

“Falls and their deleterious consequences remain a substantial risk for older adults and a huge challenge for health care teams,” writes Dr. Troen, a physician-investigator with the Veterans Affairs Western New York Healthcare System.

However, commenting in an interview, Dr. Troen cautions: “There are many epidemiological studies that are correlative, not causative, that do show a likelihood for benefit [with vitamin D supplementation]. … Therefore, there’s no reason for clinicians to discontinue vitamin D in individuals because of this study.”

“If you’re monitoring an older adult who is frail and has multiple comorbidities, you want to know what their vitamin D level is [and] provide them an appropriate supplement if needed,” he emphasized.

Some guidelines already reflect the lack of evidence of any role of vitamin D supplementation in the prevention of falls, including those of the 2018 U.S. Preventive Services Task Force, which, in a reversal of its 2012 recommendation, now does not recommend vitamin D supplementation for fall prevention in older persons without osteoporosis or vitamin D deficiency, Dr. Appel and colleagues note.

No prevention of falls regardless of baseline vitamin D

As part of STURDY (Study To Understand Fall Reduction and Vitamin D in You), Dr. Appel and colleagues enrolled 688 community-dwelling participants who had an elevated risk of falling, defined as a serum 25-hydroxyvitamin D [25(OH)D] level of 25 to 72.5 nmol/L (10-29 ng/mL).
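
The two unit systems for 25(OH)D are linked by the molecule’s molar mass (about 400.6 g/mol for 25-hydroxyvitamin D3), so dividing nmol/L by roughly 2.5 gives ng/mL. A minimal sketch checking the enrollment window:

```python
# Convert serum 25-hydroxyvitamin D from nmol/L to ng/mL.
# The factor 2.496 follows from the ~400.6 g/mol molar mass of 25(OH)D3.
NMOL_PER_NG_ML = 2.496

def nmol_to_ng_ml(nmol_per_l: float) -> float:
    return nmol_per_l / NMOL_PER_NG_ML

# The enrollment range of 25-72.5 nmol/L maps to ~10-29 ng/mL,
# and the mean enrollment level of 55.3 nmol/L to ~22 ng/mL.
for level in (25.0, 72.5, 55.3):
    print(f"{level} nmol/L = {nmol_to_ng_ml(level):.0f} ng/mL")
```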

Participants were a mean age of 77.2 years and had a mean total 25(OH)D level of 55.3 nmol/L at enrollment.

They were randomized to one of four doses of vitamin D3, including 200 IU/day (the control group), or 1,000, 2,000, or 4,000 IU/day.

The highest doses were found to be associated with worse – not better – outcomes including a shorter time to hospitalization or death, compared with the 1,000-IU/day group. The higher-dose groups were therefore switched to a dose of 1,000 IU/day or lower, and all participants were followed for up to 2 years.

Overall, 63% experienced falls over the course of the study, which, though high, was consistent with the study’s criteria of participants having an elevated fall risk.

Of the 667 participants who completed the trial, no benefit in prevention of falling was seen across any of the doses, compared with the control group dose of 200 IU/day, regardless of participants’ baseline vitamin D levels.

Safety analyses showed that even in the 1,000-IU/day group, a higher risk of first serious fall and first fall with hospitalization was seen compared with the 200-IU/day group.

A limitation is that the study did not have a placebo group. However, “200 IU/day is a very small dose, probably homeopathic,” Dr. Appel said. “It was likely close to a placebo.”

Caveats: comorbidities, subgroups

In his editorial, Dr. Troen notes that other studies, including VITAL (Vitamin D and Omega-3 Trial), also found no reduction in falls with higher vitamin D doses; however, that study did not show any significant risks with the higher doses.

He adds that the current study lacks information on subsets of participants.

“We don’t have enough information about the existing comorbidities and medications that these people are on to be able to pull back the layers. Maybe there is a subgroup that should not be getting 4,000 IU, whereas another subgroup may not be harmed and you may decide that patient can benefit,” he said.

Furthermore, the trial doesn’t address groups such as nursing home residents.

“I have, for instance, 85-year-olds with vitamin D levels of maybe 20 nmol/L with multiple medical issues, but levels that low were not included in the study, so this is a tricky business, but the bottom line is first, do no harm,” he said.

“We really need trials that factor in the multiple different aspects so we can come up, hopefully, with a holistic and interdisciplinary approach, which is usually the best way to optimize care for frail older adults,” he concluded.

The study received funding from the National Institute on Aging.

A version of this article originally appeared on Medscape.com.

FDA safety alert: Face masks with metal can burn during MRI

After a patient’s face was burned in the outline of a mask worn during a 3-Tesla MRI neck scan, the US Food and Drug Administration (FDA) cautioned that face masks containing metal can heat to unsafe temperatures during scanning.

Clinicians have known for years to ask patients to remove all metal jewelry and other objects prior to an MRI. The widespread wearing of face masks during the COVID-19 pandemic, however, adds one more consideration to the list.

The FDA’s December 7 safety communication applies to surgical and nonsurgical face masks and respirators.

The injury risk relates to rapid heating of metal components. Many face masks contain a nose wire or metal clip that helps the product conform to the face. Some masks contain metal nanoparticles, while others feature antimicrobial coatings with silver or copper. Each of these products should be avoided during MRI scanning. Also watch out for staples on headbands, the FDA warned.

If the metal content of a face mask is unknown, the FDA suggests providing the patient with a facial covering that is known not to contain any metal.

Robert E. Watson Jr, MD, PhD, chair of the American College of Radiology (ACR) Committee on MR Safety, agreed. He recommended that facilities “provide patients with masks known to be MRI-safe and not permit patient-owned masks in the MRI.”

Watson suggested this strategy for as long as face masks are required during scanning.

“COVID-19 safety protocols require that patients wear masks when being scanned, to decrease infection risk to MRI staff, decrease risk of contaminating the MRI scanner, and to protect themselves from infection,” he told Medscape Medical News. “Any conducting metal that enters the MRI machine is at risk of heating due to the radiofrequency fields inherent to image generation.”

Adverse events related to the metal components of a face mask should be reported to the FDA using the MedWatch voluntary reporting form. In addition, healthcare providers subject to the FDA user facility reporting requirements should follow procedures at their facilities to report such events.

This article first appeared on Medscape.com.

Joint guidelines favor antibody testing for certain Lyme disease manifestations

New clinical practice guidelines on Lyme disease place a strong emphasis on antibody testing to assess for rheumatologic and neurologic syndromes. “Diagnostically, we recommend testing via antibodies, and an index of antibodies in cerebrospinal fluid [CSF] versus serum. Importantly, we recommend against using polymerase chain reaction [PCR] in CSF,” Jeffrey A. Rumbaugh, MD, PhD, a coauthor of the guidelines and a member of the American Academy of Neurology, said in an interview.

The Infectious Diseases Society of America, AAN, and the American College of Rheumatology convened a multidisciplinary panel to develop the 43 recommendations, seeking input from 12 additional medical specialties and from patients. The panel conducted a systematic review of available evidence on preventing, diagnosing, and treating Lyme disease, using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) framework to rate the quality of evidence and strength of recommendations. The guidelines were simultaneously published in Clinical Infectious Diseases, Neurology, Arthritis & Rheumatology, and Arthritis Care & Research.

This is the first time these organizations have collaborated on joint Lyme disease guidelines, which focus mainly on neurologic, cardiac, and rheumatologic manifestations.

“We are very excited to provide these updated guidelines to assist clinicians working in numerous medical specialties around the country, and even the world, as they care for patients suffering from Lyme disease,” Dr. Rumbaugh said.

When to use and not to use PCR

Guideline authors called for specific testing regimens depending on presentation of symptoms. Generally, they advised that individuals with a skin rash suggestive of early disease seek a clinical diagnosis instead of laboratory testing.

Recommendations on Lyme arthritis support previous IDSA guidelines published in 2006, Linda K. Bockenstedt, MD, professor of medicine at Yale University, New Haven, Conn., and a coauthor of the guidelines, said in an interview.

To evaluate for potential Lyme arthritis, clinicians should choose serum antibody testing over PCR or culture of blood or synovial fluid/tissue. However, if a doctor is assessing a seropositive patient for Lyme arthritis diagnosis but needs more information for treatment decisions, the authors recommended PCR applied to synovial fluid or tissue over Borrelia culture.

“Synovial fluid can be analyzed by PCR, but sensitivity is generally lower than serology,” Dr. Bockenstedt explained. Additionally, culture of joint fluid or synovial tissue for Lyme spirochetes has 0% sensitivity in multiple studies. “For these reasons, we recommend serum antibody testing over PCR of joint fluid or other methods for an initial diagnosis.”

Serum antibody testing over PCR or culture is also recommended for identifying Lyme neuroborreliosis in the peripheral nervous system (PNS) or CNS.

Despite the recent popularity of Lyme PCR testing in hospitals and labs, “with Lyme at least, antibodies are better in the CSF,” Dr. Rumbaugh said. Studies have shown that “most patients with even early neurologic Lyme disease are seropositive by conventional antibody testing at time of initial clinical presentation, and that intrathecal antibody production, as demonstrated by an elevated CSF:serum index, is highly specific for CNS involvement.”

If done correctly, antibody testing is both sensitive and specific for neurologic Lyme disease. “On the other hand, sensitivity of Lyme PCR performed on CSF has been only in the 5%-17% range in studies. Incidentally, Lyme PCR on blood is also not sensitive and therefore not recommended,” Dr. Rumbaugh said.
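
To make the CSF:serum index concrete, the sketch below computes one common form of the antibody index, normalizing Borrelia-specific antibody to total IgG to correct for antibody that diffuses passively across the blood-CSF barrier. The patient values are hypothetical, and the cutoff indicating intrathecal synthesis varies by assay and laboratory (often somewhere in the 1.3-2 range); this illustrates the concept rather than the guidelines’ specified method.

```python
# Sketch: CSF:serum antibody index (AI) for intrathecal antibody production.

def antibody_index(csf_specific: float, serum_specific: float,
                   csf_total_igg: float, serum_total_igg: float) -> float:
    """Borrelia-specific antibody ratio (CSF/serum), normalized to the
    total IgG ratio to correct for passive diffusion into the CSF."""
    return (csf_specific / serum_specific) / (csf_total_igg / serum_total_igg)

# Hypothetical values: arbitrary units for specific antibody, mg/dL for total IgG.
ai = antibody_index(csf_specific=8.0, serum_specific=200.0,
                    csf_total_igg=4.0, serum_total_igg=1000.0)
print(f"Antibody index: {ai:.1f}")  # 10.0 -> well above typical cutoffs
```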

Guideline authors recommended testing in patients with the following conditions: acute neurologic disorders such as meningitis, painful radiculoneuritis, or mononeuropathy multiplex; evidence of spinal cord or brain inflammation; and acute myocarditis/pericarditis of unknown cause in an appropriate epidemiologic setting.

They did not recommend testing in patients with typical amyotrophic lateral sclerosis; relapsing-remitting multiple sclerosis; Parkinson’s disease, dementia, or cognitive decline; new-onset seizures; other neurologic syndromes in the absence of a clinical or epidemiologic history that would support a diagnosis of Lyme disease; or chronic cardiomyopathy of unknown cause.

The authors also called for judicious use of electrocardiography (ECG) to screen for Lyme carditis, recommending it only in patients with signs or symptoms of this condition. However, patients at risk for or showing signs of severe cardiac complications of Lyme disease should be hospitalized and monitored via ECG.

 

 

Timelines for antibiotics

Most patients with Lyme disease should receive oral antibiotics, although duration times vary depending on the disease state. “We recommend that prophylactic antibiotic therapy be given to adults and children only within 72 hours of removal of an identified high-risk tick bite, but not for bites that are equivocal risk or low risk,” according to the guideline authors.

Specific antibiotic treatment regimens by condition are as follows: 10-14 days for early-stage disease, 14 days for Lyme carditis, 14-21 days for neurologic Lyme disease, and 28 days for late Lyme arthritis.

“Despite arthritis occurring late in the course of infection, treatment with a 28-day course of oral antibiotic is effective, although the rates of complete resolution of joint swelling can vary,” Dr. Bockenstedt said. Clinicians may consider a second 28-day course of oral antibiotics or a 2- to 4-week course of ceftriaxone in patients with persistent swelling, after an initial course of oral antibiotics.

Citing knowledge gaps, the authors made no recommendation on secondary antibiotic treatment for unresolved Lyme arthritis. Rheumatologists can play an important role in the care of this small subset of patients, Dr. Bockenstedt noted. “Studies of patients with ‘postantibiotic Lyme arthritis’ show that they can be treated successfully with intra-articular steroids, nonsteroidal anti-inflammatory drugs, disease-modifying antirheumatic drugs, biologic response modifiers, and even synovectomy with successful outcomes.” Some of these therapies also work in cases where first courses of oral and intravenous antibiotics are unsuccessful.

“Antibiotic therapy for longer than 8 weeks is not expected to provide additional benefit to patients with persistent arthritis if that treatment has included one course of IV therapy,” the authors clarified.



For patients with Lyme disease–associated meningitis, cranial neuropathy, radiculoneuropathy, or other PNS manifestations, the authors recommended intravenous ceftriaxone, cefotaxime, penicillin G, or oral doxycycline over other antimicrobials.

“For most neurologic presentations, oral doxycycline is just as effective as appropriate IV antibiotics,” Dr. Rumbaugh said. “The exception is the relatively rare situation where the patient is felt to have parenchymal involvement of brain or spinal cord, in which case the guidelines recommend IV antibiotics over oral antibiotics.” In the studies, there was no statistically significant difference between oral or intravenous regimens in response rate or risk of adverse effects.

Patients with nonspecific symptoms such as fatigue, pain, or cognitive impairment following treatment should not receive additional antibiotic therapy if there’s no evidence of treatment failure or infection. These two markers “would include objective signs of disease activity, such as arthritis, meningitis, or neuropathy,” the guideline authors wrote in comments accompanying the recommendation.

Clinicians caring for patients with symptomatic bradycardia caused by Lyme carditis should consider temporary pacing measures instead of a permanent pacemaker. For patients hospitalized with Lyme carditis, “we suggest initially using IV ceftriaxone over oral antibiotics until there is evidence of clinical improvement, then switching to oral antibiotics to complete treatment,” they advised. Outpatients with this condition should receive oral antibiotics instead of intravenous antibiotics.

Advice on antibodies testing ‘particularly cogent’

For individuals without expertise in these areas, the recommendations are clear and useful, Daniel E. Furst, MD, professor of medicine (emeritus) at the University of California, Los Angeles, adjunct professor at the University of Washington, Seattle, and research professor at the University of Florence (Italy), said in an interview.

Dr. Daniel E. Furst

“As a rheumatologist, I would have appreciated literature references for some of the recommendations but, nevertheless, find these useful. I applaud the care with which the evidence was gathered and the general formatting, which tried to review multiple possible scenarios surrounding Lyme arthritis,” said Dr. Furst, offering a third-party perspective.

The advice on using antibodies tests to make a diagnosis of Lyme arthritis “is particularly cogent and more useful than trying to culture these fastidious organisms,” he added.

The IDSA, AAN, and ACR provided support for the guideline. Dr. Bockenstedt reported receiving research funding from the National Institutes of Health and the Gordon and the Llura Gund Foundation and remuneration from L2 Diagnostics for investigator-initiated NIH-sponsored research. Dr. Rumbaugh had no conflicts of interest to disclose. Dr. Furst reported no conflicts of interest in commenting on these guidelines.

SOURCE: Rumbaugh JA et al. Clin Infect Dis. 2020 Nov 30. doi: 10.1093/cid/ciaa1215.

Publications
Topics
Sections

New clinical practice guidelines on Lyme disease place a strong emphasis on antibody testing to assess for rheumatologic and neurologic syndromes. “Diagnostically, we recommend testing via antibodies, and an index of antibodies in cerebrospinal fluid [CSF] versus serum. Importantly, we recommend against using polymerase chain reaction [PCR] in CSF,” Jeffrey A. Rumbaugh, MD, PhD, a coauthor of the guidelines and a member of the American Academy of Neurology, said in an interview.

The Infectious Diseases Society of America, AAN, and the American College of Rheumatology convened a multidisciplinary panel to develop the 43 recommendations, seeking input from 12 additional medical specialties and from patients. The panel conducted a systematic review of available evidence on preventing, diagnosing, and treating Lyme disease, using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) model to rate the quality of clinical evidence and the strength of recommendations. The guidelines were published simultaneously in Clinical Infectious Diseases, Neurology, Arthritis & Rheumatology, and Arthritis Care & Research.

This is the first time these organizations have collaborated on joint Lyme disease guidelines, which focus mainly on neurologic, cardiac, and rheumatologic manifestations.

“We are very excited to provide these updated guidelines to assist clinicians working in numerous medical specialties around the country, and even the world, as they care for patients suffering from Lyme disease,” Dr. Rumbaugh said.
 

When to use and not to use PCR

Guideline authors called for specific testing strategies depending on the presentation of symptoms. Generally, they advised that individuals with a skin rash suggestive of early disease (erythema migrans) be diagnosed clinically rather than by laboratory testing.

Recommendations on Lyme arthritis support previous IDSA guidelines published in 2006, Linda K. Bockenstedt, MD, professor of medicine at Yale University, New Haven, Conn., and a coauthor of the guidelines, said in an interview.

To evaluate for potential Lyme arthritis, clinicians should choose serum antibody testing over PCR or culture of blood or synovial fluid/tissue. However, when a seropositive patient requires further evaluation to guide treatment decisions, the authors recommended PCR of synovial fluid or tissue over Borrelia culture.

“Synovial fluid can be analyzed by PCR, but sensitivity is generally lower than serology,” Dr. Bockenstedt explained. Additionally, culture of joint fluid or synovial tissue for Lyme spirochetes has 0% sensitivity in multiple studies. “For these reasons, we recommend serum antibody testing over PCR of joint fluid or other methods for an initial diagnosis.”

Serum antibody testing is likewise recommended over PCR or culture for identifying Lyme neuroborreliosis of the peripheral nervous system (PNS) or CNS.

Despite the recent popularity of Lyme PCR testing in hospitals and labs, “with Lyme at least, antibodies are better in the CSF,” Dr. Rumbaugh said. Studies have shown that “most patients with even early neurologic Lyme disease are seropositive by conventional antibody testing at time of initial clinical presentation, and that intrathecal antibody production, as demonstrated by an elevated CSF:serum index, is highly specific for CNS involvement.”
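
For readers unfamiliar with the CSF:serum index, a standard formulation is sketched below for illustration; the guideline summary itself does not spell out the calculation, and positivity cutoffs vary by laboratory. The index normalizes Borrelia-specific IgG to total IgG in each compartment:

antibody index = (CSF Borrelia-specific IgG / serum Borrelia-specific IgG) ÷ (CSF total IgG / serum total IgG)

A result above the laboratory’s cutoff suggests that antibody is being produced within the CNS rather than diffusing passively from the blood.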



If done correctly, antibody testing is both sensitive and specific for neurologic Lyme disease. “On the other hand, sensitivity of Lyme PCR performed on CSF has been only in the 5%-17% range in studies. Incidentally, Lyme PCR on blood is also not sensitive and therefore not recommended,” Dr. Rumbaugh said.

Guideline authors recommended testing in patients with the following conditions: acute neurologic disorders such as meningitis, painful radiculoneuritis, or mononeuropathy multiplex; evidence of spinal cord or brain inflammation; and acute myocarditis/pericarditis of unknown cause in an appropriate epidemiologic setting.

They recommended against testing in patients with typical amyotrophic lateral sclerosis; relapsing-remitting multiple sclerosis; Parkinson’s disease, dementia, or cognitive decline; new-onset seizures; other neurologic syndromes in the absence of a clinical or epidemiologic history supporting a diagnosis of Lyme disease; or chronic cardiomyopathy of unknown cause.

The authors also called for judicious use of electrocardiography (ECG) to screen for Lyme carditis, recommending it only in patients with signs or symptoms of the condition. However, patients at risk for or showing signs of severe cardiac complications of Lyme disease should be hospitalized and monitored by ECG.

Timelines for antibiotics

Most patients with Lyme disease should receive oral antibiotics, although the recommended duration varies with the disease state. “We recommend that prophylactic antibiotic therapy be given to adults and children only within 72 hours of removal of an identified high-risk tick bite, but not for bites that are equivocal risk or low risk,” according to the guideline authors.

Recommended antibiotic durations by condition are as follows: 10-14 days for early-stage disease, 14 days for Lyme carditis, 14-21 days for neurologic Lyme disease, and 28 days for late Lyme arthritis.

“Despite arthritis occurring late in the course of infection, treatment with a 28-day course of oral antibiotic is effective, although the rates of complete resolution of joint swelling can vary,” Dr. Bockenstedt said. In patients with persistent swelling after an initial course of oral antibiotics, clinicians may consider a second 28-day course of oral antibiotics or a 2- to 4-week course of ceftriaxone.

Citing knowledge gaps, the authors made no recommendation on secondary antibiotic treatment for unresolved Lyme arthritis. Rheumatologists can play an important role in the care of this small subset of patients, Dr. Bockenstedt noted. “Studies of patients with ‘postantibiotic Lyme arthritis’ show that they can be treated successfully with intra-articular steroids, nonsteroidal anti-inflammatory drugs, disease-modifying antirheumatic drugs, biologic response modifiers, and even synovectomy with successful outcomes.” Some of these therapies also work in cases where first courses of oral and intravenous antibiotics are unsuccessful.

“Antibiotic therapy for longer than 8 weeks is not expected to provide additional benefit to patients with persistent arthritis if that treatment has included one course of IV therapy,” the authors clarified.



For patients with Lyme disease–associated meningitis, cranial neuropathy, radiculoneuropathy, or other PNS manifestations, the authors recommended intravenous ceftriaxone, cefotaxime, penicillin G, or oral doxycycline over other antimicrobials.

“For most neurologic presentations, oral doxycycline is just as effective as appropriate IV antibiotics,” Dr. Rumbaugh said. “The exception is the relatively rare situation where the patient is felt to have parenchymal involvement of brain or spinal cord, in which case the guidelines recommend IV antibiotics over oral antibiotics.” In the studies reviewed, there was no statistically significant difference between oral and intravenous regimens in response rate or risk of adverse effects.

Patients with nonspecific symptoms such as fatigue, pain, or cognitive impairment following treatment should not receive additional antibiotic therapy if there is no evidence of treatment failure or infection. Such evidence “would include objective signs of disease activity, such as arthritis, meningitis, or neuropathy,” the guideline authors wrote in comments accompanying the recommendation.

Clinicians caring for patients with symptomatic bradycardia caused by Lyme carditis should consider temporary pacing measures instead of a permanent pacemaker. For patients hospitalized with Lyme carditis, “we suggest initially using IV ceftriaxone over oral antibiotics until there is evidence of clinical improvement, then switching to oral antibiotics to complete treatment,” they advised. Outpatients with this condition should receive oral antibiotics instead of intravenous antibiotics.

Advice on antibody testing ‘particularly cogent’

For individuals without expertise in these areas, the recommendations are clear and useful, Daniel E. Furst, MD, professor of medicine (emeritus) at the University of California, Los Angeles, adjunct professor at the University of Washington, Seattle, and research professor at the University of Florence (Italy), said in an interview.

“As a rheumatologist, I would have appreciated literature references for some of the recommendations but, nevertheless, find these useful. I applaud the care with which the evidence was gathered and the general formatting, which tried to review multiple possible scenarios surrounding Lyme arthritis,” said Dr. Furst, offering a third-party perspective.

The advice on using antibody tests to make a diagnosis of Lyme arthritis “is particularly cogent and more useful than trying to culture these fastidious organisms,” he added.

The IDSA, AAN, and ACR provided support for the guideline. Dr. Bockenstedt reported receiving research funding from the National Institutes of Health and the Gordon and Llura Gund Foundation, as well as remuneration from L2 Diagnostics for investigator-initiated NIH-sponsored research. Dr. Rumbaugh had no conflicts of interest to disclose. Dr. Furst reported no conflicts of interest in commenting on these guidelines.

SOURCE: Rumbaugh JA et al. Clin Infect Dis. 2020 Nov 30. doi: 10.1093/cid/ciaa1215.

