Weight gain during pregnancy may play role in child ADHD risk

Obesity in women of reproductive age has emerged as one of the main risk factors associated with neonatal complications and long-term neuropsychiatric consequences in offspring, including attention-deficit/hyperactivity disorder.

Research has also linked pregestational diabetes and gestational diabetes mellitus (GDM) to an increased risk for ADHD in offspring. Now, an observational study of 1,036 singleton births at one hospital between 1998 and 2008 suggests that in the presence of GDM, maternal obesity combined with excessive weight gain during pregnancy may be jointly associated with increased risk of offspring ADHD. The median follow-up was 17.7 years.

Maternal obesity was independently associated with ADHD (adjusted hazard ratio, 1.66; 95% confidence interval: 1.07-2.60), but excessive weight gain during pregnancy and maternal overweight were not, reported Verónica Perea, MD, PhD, of the Hospital Universitari Mútua de Terrassa, Barcelona, and colleagues in the Journal of Clinical Endocrinology & Metabolism.

However, in women with pregestational obesity who gained more weight than recommended by the National Academy of Medicine (NAM), the risk of offspring ADHD was higher, compared with women of normal weight whose pregnancy weight gain stayed within NAM guidelines (adjusted hazard ratio, 2.13; 95% confidence interval: 1.14-4.01).

“The results of this study suggest that the negative repercussions of excessive weight gain on children within the setting of a high-risk population with GDM and obesity were not only observed during the prenatal period but also years later with a development of ADHD,” the researchers wrote.

The study also showed that when maternal weight gain did not exceed NAM guidelines, maternal obesity was no longer independently associated with ADHD in offspring (aHR, 1.36; 95% CI: 0.78-2.36); because that confidence interval spans 1.0, no statistically significant association could be shown. This finding conflicts with earlier studies focusing primarily on the role of pregestational maternal weight, the researchers said. A 2018 nationwide Finnish cohort study in newborns showed an increased long-term risk of ADHD in those born to women with GDM, compared with the nondiabetic population. This long-term risk of ADHD increased in the presence of pregestational obesity (HR, 1.64).

Similarly, evidence from systematic reviews and meta-analyses has demonstrated that antenatal lifestyle interventions to prevent excessive weight gain during pregnancy were associated with a reduction in adverse pregnancy outcomes. However, evidence on offspring mental health was lacking, especially in high-risk pregnancies with gestational diabetes, the study authors said.

Although causal inferences can’t be drawn from the current observational study, “it seems that the higher risk [of ADHD] observed would be explained by the role of gestational weight gain during the antenatal period,” Dr. Perea said in an interview. Importantly, the study highlights a window of opportunity for promoting healthy weight gain during pregnancy, Dr. Perea said. “This should be a priority in the current management of gestation.”

Fatima Cody Stanford, MD, MPH, an associate professor of medicine and pediatrics at Harvard Medical School, Boston, agreed. “I think one of the key issues is that there’s very little attention paid to how weight gain is regulated during pregnancy,” she said in an interview. On many other points, however, Dr. Stanford, who is a specialist in obesity medicine at Massachusetts General Hospital Weight Center, did not agree.

The association between ADHD and obesity has already been well established by a 2019 meta-analysis and systematic review of studies over the last 10 years, she emphasized. “These studies were able to show a much stronger association between maternal obesity and ADHD in offspring because they were powered to detect differences.”

The current study does not say “anything new or novel,” Dr. Stanford added. “Maternal obesity and the association with an increased risk of ADHD in offspring is the main issue. I don’t think there was any appreciable increase when weight gain during pregnancy was factored in. It’s mild at best.”

Eran Bornstein, MD, vice-chair of obstetrics and gynecology at Lenox Hill Hospital, New York, expressed a similar point of view. Although the study findings “add to the current literature,” they should be interpreted “cautiously,” Dr. Bornstein said in an interview.

The size of the effect on ADHD risk attributable to maternal weight gain during pregnancy “was not clear,” he said. “Cohort studies of this sort are excellent for finding associations which help us generate the hypothesis, but this doesn’t demonstrate a cause and effect or a magnitude for this effect.”

Physicians should follow cumulative data suggesting that maternal obesity is associated with a number of pregnancy complications and neonatal outcomes in women with and without diabetes, Dr. Bornstein suggested. “Optimizing maternal weight prior to pregnancy and adhering to recommendations regarding weight gain has the potential to improve some of these outcomes.”

Treating obesity prior to conception mitigates GDM risk, agreed Dr. Stanford. “The issue,” she explained, “is that all of the drugs approved for the treatment of obesity are contraindicated in pregnancy and lifestyle modification fails in 96% of cases, even when there is no pregnancy.” Drugs such as metformin are being used off-label to treat obesity and to safely manage gestational weight gain, she said. “Those of us who practice obesity medicine know that metformin can be safely used throughout pregnancy with no harm to the fetus.”

This study was partially funded by Fundació Docència i Recerca MútuaTerrassa. Dr. Perea and study coauthors reported having no conflicts of interest. Dr. Stanford disclosed relationships with Novo Nordisk, Eli Lilly, Boehringer Ingelheim, Gelesis, Pfizer, Currax, and Rhythm. Dr. Bornstein reported having no conflicts of interest.

This story was updated on 11/7/2022. 


Lack of exercise linked to small heart, HFpEF

Chronic lack of exercise – dubbed “exercise deficiency” – is associated with cardiac atrophy, reduced cardiac output and chamber size, and diminished cardiorespiratory fitness (CRF) in a subgroup of patients with heart failure with preserved ejection fraction (HFpEF), researchers say.

Increasing the physical activity levels of these sedentary individuals could be an effective preventive strategy, particularly for those who are younger and middle-aged, they suggest.

Thinking of HFpEF as an exercise deficiency syndrome leading to a small heart “flies in the face of decades of cardiovascular teaching, because traditionally, we’ve thought of heart failure as the big floppy heart,” Andre La Gerche, MBBS, PhD, of the Baker Heart and Diabetes Institute, Melbourne, told this news organization.

“While it is true that some people with HFpEF have thick, stiff hearts, we propose that another subset has a normal heart, except it’s small because it’s been underexercised,” he said.

The article, published online as part of a Focus Seminar series in the Journal of the American College of Cardiology, has “gone viral on social media,” Jason C. Kovacic, MBBS, PhD, of the Victor Chang Cardiac Research Institute, Darlinghurst, Australia, told this news organization.

Dr. Kovacic is a JACC section editor and the coordinating and senior author of the series, which covers other issues surrounding physical activity, both in athletes and the general public.

‘Coin-dropping moment’

To support their hypothesis that HFpEF is an exercise deficiency in certain patients, Dr. La Gerche and colleagues conducted a literature review that highlights the following points:

  • There is a strong association between physical activity and both CRF and heart function.
  • Exercise deficiency is a major risk factor for HFpEF in a subset of patients.
  • Increasing physical activity is associated with greater cardiac mass, stroke volumes, cardiac output, and peak oxygen consumption.
  • Physical inactivity leads to loss of heart muscle, reduced output and chamber size, and less ability to improve cardiac performance with exercise.
  • Aging results in a smaller, stiffer heart; however, this effect is mitigated by regular exercise.
  • Individuals who are sedentary throughout life cannot attenuate age-related reductions in heart size and have increasing chamber stiffness.

“When we explain it, it’s like a coin-dropping moment, because it’s actually a really simple concept,” Dr. La Gerche said. “A small heart has a small stroke volume. A patient with a small heart with a maximal stroke volume of 60 mL can generate a cardiac output of 9 L/min at a heart rate of 150 beats/min during exercise – an output that just isn’t enough. It’s like trying to drive a truck with a 50cc motorbike engine.”
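
To restate the arithmetic behind that example (cardiac output is the product of stroke volume and heart rate; the 60 mL and 150 beats/min figures are the ones Dr. La Gerche quotes, while the resting-output comparison is a standard physiology reference rather than a figure from the article):

\[
\mathrm{CO} = \mathrm{SV} \times \mathrm{HR} = 60\ \mathrm{mL} \times 150\ \mathrm{beats/min} = 9{,}000\ \mathrm{mL/min} = 9\ \mathrm{L/min}
\]

A typical adult heart delivers roughly 5 L/min at rest and can raise that to 20 L/min or more during vigorous exercise, which is why a ceiling of 9 L/min “just isn’t enough.”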

“Plus,” Dr. La Gerche added, “exercise deficiency also sets the stage for comorbidities such as obesity, diabetes, and high blood pressure, all of which can ultimately lead to HFpEF.”

Considering HFpEF as an exercise deficiency syndrome has two clinical implications, Dr. La Gerche said. “First, it helps us understand the condition and diagnose more cases. For example, I think practitioners will start to recognize that breathlessness in some of their patients is associated with a small heart.”

“Second,” he said, “if it’s an exercise deficiency syndrome, the treatment is exercise. For most people, that means exercising regularly before the age of 60 to prevent HFpEF, because studies have found that after the age of 60, the heart is a bit fixed and harder to remodel. That doesn’t mean you shouldn’t try after 60 or that you won’t get benefit. But the real sweet spot is in middle age and younger.”

The bigger picture

The JACC Focus Seminar series starts with an article that underscores the benefits of regular physical activity. “The key is getting our patients to meet the guidelines: 150 to 300 minutes of moderate intensity exercise per week, or 75 to 250 minutes of vigorous activity per week,” Dr. Kovacic emphasized.

“Yes, we can give a statin to lower cholesterol. Yes, we can give a blood pressure medication to lower blood pressure. But when you prescribe exercise, you impact patients’ blood pressure, their cholesterol, their weight, their sense of well-being,” he said. “It cuts across so many different aspects of people’s lives that it’s important to underscore the value of exercise to everybody.”

That includes physicians, he affirmed. “It behooves all physicians to be leading by example. I would encourage those who are overweight or aren’t exercising as much as they should be to make the time to be healthy and to exercise. If you don’t, then bad health will force you to make the time to deal with bad health issues.”

Other articles in the series deal with the athlete’s heart. Christopher Semsarian, MBBS, PhD, MPH, University of Sydney, and colleagues discuss emerging data on hypertrophic cardiomyopathy and other genetic cardiovascular diseases, with the conclusion that it is probably okay for more athletes with these conditions to participate in recreational and competitive sports than was previously thought – another paradigm shift, according to Dr. Kovacic.

The final article addresses some of the challenges and controversies related to the athlete’s heart, including whether extreme exercise increases vulnerability to atrial fibrillation and other arrhythmias, and the impact of gender on the cardiac response to exercise, a question that cannot yet be answered because of a paucity of data on women in sports.

Overall, Dr. Kovacic said, the series makes for “compelling” reading that should encourage readers to embark on their own studies to add to the data and support exercise prescription across the board.

No commercial funding or relevant conflicts of interest were reported.

A version of this article first appeared on Medscape.com.


When do we stop using BMI to diagnose obesity?

“BMI is trash. Full stop.” This controversial tweet received 26,500 likes and almost 3,000 retweets. The 400 comments from medical and non–health care personnel ranged from agreeable to contrary to offensive.

Regardless of your opinion on BMI (body mass index), this conversation highlighted that the medical community needs to discuss the limitations of BMI and decide its future.

As a Black woman who is an obesity expert living with the impact of obesity in my own life, I know the emotion that a BMI conversation can evoke. Before emotions hijack the conversation, let’s discuss BMI’s past, present, and future.

BMI: From observational measurement to clinical use

Imagine walking into your favorite clothing store where an eager clerk greets you with a shirt to try on. The fit is off, but the clerk insists that the shirt must fit because everyone who’s your height should be able to wear it. This scenario seems ridiculous. But this is how we’ve come to use the BMI. Instead of thinking that people of the same height may be the same size, we declare that they must be the same size.

The idea behind the BMI was conceived in 1832 by Belgian anthropologist and mathematician Adolphe Quetelet, but he didn’t intend for it to be a health measure. Instead, it was simply an observation of how people’s weight changed in proportion to height over their lifetime.

Fast-forward to the 20th century, when insurance companies began using weight as an indicator of health status. Weights were recorded in a “Life Table.” Individual health status was determined on the basis of arbitrary cut-offs for weight on the Life Tables. Furthermore, White men set the “normal” weight standards because they were the primary insurance holders.

In 1972, Dr. Ancel Keys, a physiologist and leading expert in body composition at the time, cried foul on this practice and sought to standardize the use of weight as a health indicator. Dr. Keys used Quetelet’s calculation and termed it the Body Mass Index.
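
For reference, the calculation Keys standardized divides weight by the square of height; the formula and the conventional WHO adult cutoffs mentioned below are standard definitions rather than details drawn from this article:

\[
\mathrm{BMI} = \frac{\text{weight (kg)}}{\text{height (m)}^2}
\]

For example, a person weighing 95 kg at a height of 1.75 m has a BMI of 95 / 1.75² ≈ 31.0, just above the conventional WHO adult threshold of 30 for obesity (25 marks the start of the overweight range).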

By 1985, the U.S. National Institutes of Health and the World Health Organization adopted the BMI. By the 21st century, BMI had become widely used in clinical settings. For example, the Centers for Medicare & Medicaid Services adopted BMI as a quality-of-care measure, placing even more pressure on clinicians to use BMI as a health screening tool.

BMI as a tool to diagnose obesity

We can’t discuss BMI without discussing the disease of obesity. BMI is the most widely used tool to diagnose obesity. In the United States, one-third of Americans meet the criteria for obesity. Another one-third are at risk for obesity.

Compared with BMI’s relatively quick acceptance into clinical practice, however, obesity was only recently recognized as a disease.

Historically, obesity has been viewed as a lifestyle choice, fueled by misinformation and multiple forms of bias. The historical bias associated with BMI and discrimination has led some public health officials and scholars to dismiss the use of BMI or to fail to recognize obesity as a disease.

This is a dangerous conclusion, because it comes to the detriment of the very people disproportionately impacted by obesity-related health disparities.

Furthermore, weight bias continues to prevent people living with obesity from receiving insurance coverage for life-enhancing obesity medications and interventions.

Is it time to phase out BMI?

The BMI is intertwined with many forms of bias: age, gender, racial, ethnic, and even weight. Therefore, it is time to phase out BMI. However, phasing out BMI is complex and will take time, given that:

  • Obesity is still a relatively “young” disease. 2023 marks the 10th anniversary of obesity’s recognition as a disease by the American Medical Association. Currently, BMI is the most widely used tool to diagnose obesity. Tools such as waist circumference, body composition, and metabolic health assessment will need to replace the BMI. Shifting from BMI emphasizes that obesity is more than a number on the scale. Obesity, as defined by the Obesity Medicine Association, is indeed a “chronic, relapsing, multi-factorial, neurobehavioral disease, wherein an increase in body fat promotes adipose tissue dysfunction and abnormal fat mass physical forces, resulting in adverse metabolic, biomechanical, and psychosocial health consequences.”
  • Much of our health research is tied to BMI. There have been some shifts in looking at non–weight-related health indicators. However, we need more robust studies evaluating other health indicators beyond weight and BMI. The availability of this data will help eliminate the need for BMI and promote individualized health assessment.
  • Current treatment guidelines for obesity medications are based on BMI. (Note: Medications to treat obesity are called “anti-obesity” medications or AOMs. However, given the stigma associated with obesity, I prefer not to use the term “anti-obesity.”) Presently this interferes with long-term obesity treatment. Once BMI is “normal,” many patients lose insurance coverage for their obesity medication, despite needing long-term metabolic support to overcome the compensatory mechanism of weight regain. Obesity is a chronic disease that exists independent of weight status. Therefore, using non-BMI measures will help ensure appropriate lifetime support for obesity.

The preceding are barriers, not impossibilities. In the interim, if BMI is still used in any capacity, the BMI reference chart should be an adjusted BMI chart based on age, race, ethnicity, biological sex, and obesity-related conditions. Furthermore, BMI isn’t the sole determining factor of health status.

Instead, an “abnormal” BMI should prompt conversation and further testing, if needed, to determine an individual’s health. For example, compare two people of the same height with different BMIs and lifestyles. Current studies suggest that a person flagged as having a high adjusted BMI but who practices a healthy lifestyle and has no metabolic disease is at lower risk than a person with a “normal” BMI but a high waist circumference and an unhealthy lifestyle.

Regardless of your personal feelings, the facts are clear. Technology empowers us with better tools than BMI to determine health status. Therefore, it’s not a matter of if we will stop using BMI but when.

Sylvia Gonsahn-Bollie, MD, DipABOM, is an integrative obesity specialist who specializes in individualized solutions for emotional and biological overeating. Connect with her at www.embraceyouweightloss.com or on Instagram @embraceyoumd. Her bestselling book, “Embrace You: Your Guide to Transforming Weight Loss Misconceptions Into Lifelong Wellness,” is Healthline.com’s Best Overall Weight Loss Book 2022 and one of Livestrong.com’s picks for the 8 Best Weight-Loss Books to Read in 2022.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

“BMI is trash. Full stop.” This controversial tweet received 26,500 likes and almost 3,000 retweets. The 400 comments from medical and non–health care personnel ranged from agreeable to contrary to offensive.

Regardless of your opinion on BMI (body mass index), this conversation highlighted that the medical community needs to discuss the limitations of BMI and decide its future.

As a Black woman who is an obesity expert living with the impact of obesity in my own life, I know the emotion that a BMI conversation can evoke. Before emotions hijack the conversation, let’s discuss BMI’s past, present, and future.
 

BMI: From observational measurement to clinical use

Imagine walking into your favorite clothing store where an eager clerk greets you with a shirt to try on. The fit is off, but the clerk insists that the shirt must fit because everyone who’s your height should be able to wear it. This scenario seems ridiculous. But this is how we’ve come to use the BMI. Instead of thinking that people of the same height may be the same size, we declare that they must be the same size.

The idea behind the BMI was conceived in 1832 by Belgian anthropologist and mathematician Adolphe Quetelet, but he didn’t intend for it to be a health measure. Instead, it was simply an observation of how people’s weight changed in proportion to height over their lifetime.

Fast-forward to the 20th century, when insurance companies began using weight as an indicator of health status. Weights were recorded in a “Life Table.” Individual health status was determined on the basis of arbitrary cut-offs for weight on the Life Tables. Furthermore, White men set the “normal” weight standards because they were the primary insurance holders.

In 1972, Dr. Ancel Keys, a physician and leading expert in body composition at the time, cried foul on this practice and sought to standardize the use of weight as a health indicator. Dr. Keys used Quetelet’s calculation and termed it the Body Mass Index.

By 1985, the U.S. National Institutes of Health and the World Health Organization adopted the BMI. By the 21st century, BMI had become widely used in clinical settings. For example, the Centers for Medicare & Medicaid Services adopted BMI as a quality-of-care measure, placing even more pressure on clinicians to use BMI as a health screening tool.
 

BMI as a tool to diagnose obesity

We can’t discuss BMI without discussing the disease of obesity. BMI is the most widely used tool to diagnose obesity. In the United States, one-third of Americans meet the criteria for obesity. Another one-third are at risk for obesity.

Compared with BMI’s relatively quick acceptance into clinical practice, however, obesity was only recently recognized as a disease.

Historically, obesity has been viewed as a lifestyle choice, fueled by misinformation and multiple forms of bias. The historical bias associated with BMI and discrimination has led some public health officials and scholars to dismiss the use of BMI or fail to recognize obesity as disease.

This is a dangerous conclusion, because it comes to the detriment of the very people disproportionately impacted by obesity-related health disparities.

Furthermore, weight bias continues to prevent people living with obesity from receiving insurance coverage for life-enhancing obesity medications and interventions.
 

 

 

Is it time to phase out BMI?

The BMI is intertwined with many forms of bias: age, gender, racial, ethnic, and even weight. Therefore, it is time to phase out BMI. However, phasing out BMI is complex and will take time, given that:

  • Obesity is still a relatively “young” disease. 2023 marks the 10th anniversary of obesity’s recognition as a disease by the American Medical Association. Currently, BMI is the most widely used tool to diagnose obesity. Tools such as waist circumference, body composition, and metabolic health assessment will need to replace the BMI. Shifting from BMI emphasizes that obesity is more than a number on the scale. Obesity, as defined by the Obesity Medicine Association, is indeed a “chronic, relapsing, multi-factorial, neurobehavioral disease, wherein an increase in body fat promotes adipose tissue dysfunction and abnormal fat mass physical forces, resulting in adverse metabolic, biomechanical, and psychosocial health consequences.”
  • Much of our health research is tied to BMI. There have been some shifts in looking at non–weight-related health indicators. However, we need more robust studies evaluating other health indicators beyond weight and BMI. The availability of this data will help eliminate the need for BMI and promote individualized health assessment.
  • Current treatment guidelines for obesity medications are based on BMI. (Note: Medications to treat obesity are called “anti-obesity” medications or AOMs. However, given the stigma associated with obesity, I prefer not to use the term “anti-obesity.”) Presently this interferes with long-term obesity treatment. Once BMI is “normal,” many patients lose insurance coverage for their obesity medication, despite needing long-term metabolic support to overcome the compensatory mechanism of weight regain. Obesity is a chronic disease that exists independent of weight status. Therefore, using non-BMI measures will help ensure appropriate lifetime support for obesity.

The preceding are barriers, not impossibilities. In the interim, if BMI is still used in any capacity, the BMI reference chart should be an adjusted BMI chart based on agerace, ethnicity, biological sex, and obesity-related conditions. Furthermore, BMI isn’t the sole determining factor of health status.

Instead, an “abnormal” BMI should initiate conversation and further testing, if needed, to determine an individual’s health. For example, compare two people of the same height with different BMIs and lifestyles. Current studies support that a person flagged as having a high adjusted BMI but practicing a healthy lifestyle and having no metabolic diseases is less at risk than a person with a “normal” BMI but high waist circumference and an unhealthy lifestyle.

Regardless of your personal feelings, the facts are clear. Technology empowers us with better tools than BMI to determine health status. Therefore, it’s not a matter of if we will stop using BMI but when.

Sylvia Gonsahn-Bollie, MD, DipABOM, is an integrative obesity specialist who specializes in individualized solutions for emotional and biological overeating. Connect with her at www.embraceyouweightloss.com or on Instagram @embraceyoumd. Her bestselling book, “Embrace You: Your Guide to Transforming Weight Loss Misconceptions Into Lifelong Wellness,” is Healthline.com’s Best Overall Weight Loss Book 2022 and one of Livestrong.com’s picks for the 8 Best Weight-Loss Books to Read in 2022.

A version of this article first appeared on Medscape.com.

“BMI is trash. Full stop.” This controversial tweet received 26,500 likes and almost 3,000 retweets. The 400 comments from medical and non–health care personnel ranged from agreeable to contrary to offensive.

Regardless of your opinion on BMI (body mass index), this conversation highlighted that the medical community needs to discuss the limitations of BMI and decide its future.

As a Black woman who is an obesity expert living with the impact of obesity in my own life, I know the emotion that a BMI conversation can evoke. Before emotions hijack the conversation, let’s discuss BMI’s past, present, and future.
 

BMI: From observational measurement to clinical use

Imagine walking into your favorite clothing store where an eager clerk greets you with a shirt to try on. The fit is off, but the clerk insists that the shirt must fit because everyone who’s your height should be able to wear it. This scenario seems ridiculous. But this is how we’ve come to use the BMI. Instead of thinking that people of the same height may be the same size, we declare that they must be the same size.

The idea behind the BMI was conceived in 1832 by Belgian anthropologist and mathematician Adolphe Quetelet, but he didn’t intend for it to be a health measure. Instead, it was simply an observation of how people’s weight changed in proportion to height over their lifetime.

Fast-forward to the 20th century, when insurance companies began using weight as an indicator of health status. Weights were recorded in a “Life Table.” Individual health status was determined on the basis of arbitrary cut-offs for weight on the Life Tables. Furthermore, White men set the “normal” weight standards because they were the primary insurance holders.

In 1972, Dr. Ancel Keys, a physician and leading expert in body composition at the time, cried foul on this practice and sought to standardize the use of weight as a health indicator. Dr. Keys used Quetelet’s calculation and termed it the Body Mass Index.

By 1985, the U.S. National Institutes of Health and the World Health Organization adopted the BMI. By the 21st century, BMI had become widely used in clinical settings. For example, the Centers for Medicare & Medicaid Services adopted BMI as a quality-of-care measure, placing even more pressure on clinicians to use BMI as a health screening tool.
 

BMI as a tool to diagnose obesity

We can’t discuss BMI without discussing the disease of obesity. BMI is the most widely used tool to diagnose obesity. In the United States, one-third of Americans meet the criteria for obesity. Another one-third are at risk for obesity.

Compared with BMI’s relatively quick acceptance into clinical practice, however, obesity was only recently recognized as a disease.

Historically, obesity has been viewed as a lifestyle choice, fueled by misinformation and multiple forms of bias. The historical bias associated with BMI and discrimination has led some public health officials and scholars to dismiss the use of BMI or fail to recognize obesity as disease.

This is a dangerous conclusion, because it comes to the detriment of the very people disproportionately impacted by obesity-related health disparities.

Furthermore, weight bias continues to prevent people living with obesity from receiving insurance coverage for life-enhancing obesity medications and interventions.
 

 

 

Is it time to phase out BMI?

The BMI is intertwined with many forms of bias: age, gender, racial, ethnic, and even weight. Therefore, it is time to phase out BMI. However, phasing out BMI is complex and will take time, given that:

  • Obesity is still a relatively “young” disease. 2023 marks the 10th anniversary of obesity’s recognition as a disease by the American Medical Association. Currently, BMI is the most widely used tool to diagnose obesity. Tools such as waist circumference, body composition, and metabolic health assessment will need to replace the BMI. Shifting from BMI emphasizes that obesity is more than a number on the scale. Obesity, as defined by the Obesity Medicine Association, is indeed a “chronic, relapsing, multi-factorial, neurobehavioral disease, wherein an increase in body fat promotes adipose tissue dysfunction and abnormal fat mass physical forces, resulting in adverse metabolic, biomechanical, and psychosocial health consequences.”
  • Much of our health research is tied to BMI. There have been some shifts in looking at non–weight-related health indicators. However, we need more robust studies evaluating other health indicators beyond weight and BMI. The availability of this data will help eliminate the need for BMI and promote individualized health assessment.
  • Current treatment guidelines for obesity medications are based on BMI. (Note: Medications to treat obesity are called “anti-obesity” medications, or AOMs. However, given the stigma associated with obesity, I prefer not to use the term “anti-obesity.”) Tying coverage to BMI interferes with long-term obesity treatment. Once BMI is “normal,” many patients lose insurance coverage for their obesity medication, despite needing long-term metabolic support to overcome the compensatory mechanisms of weight regain. Obesity is a chronic disease that exists independent of weight status. Therefore, using non-BMI measures will help ensure appropriate lifelong support for obesity treatment.

The preceding are barriers, not impossibilities. In the interim, if BMI is still used in any capacity, the BMI reference chart should be an adjusted chart based on age, race, ethnicity, biological sex, and obesity-related conditions. Furthermore, BMI shouldn’t be treated as the sole determinant of health status.

Instead, an “abnormal” BMI should initiate conversation and further testing, if needed, to determine an individual’s health. For example, compare two people of the same height with different BMIs and lifestyles. Current studies suggest that a person flagged by a high adjusted BMI who practices a healthy lifestyle and has no metabolic disease is at lower risk than a person with a “normal” BMI but a high waist circumference and an unhealthy lifestyle.
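To make the adjusted-chart idea concrete: one published example of population-adjusted cut-offs is the WHO expert consultation’s suggested action points of 23 and 27.5 kg/m² for adults of Asian descent. The sketch below is illustrative only, not a validated adjusted chart; the function name and return strings are ours:

```python
# Illustrative only -- not a validated adjusted BMI chart.
DEFAULT_ACTION_POINTS = (25.0, 30.0)  # conventional overweight/obesity cut-offs
ASIAN_ACTION_POINTS = (23.0, 27.5)    # WHO-suggested action points for Asian adults

def risk_flag(bmi_value: float, asian_descent: bool = False) -> str:
    lower, upper = ASIAN_ACTION_POINTS if asian_descent else DEFAULT_ACTION_POINTS
    if bmi_value >= upper:
        return "high flag -- warrants conversation and further testing"
    if bmi_value >= lower:
        return "raised flag -- start the conversation"
    return "no BMI-based flag"

print(risk_flag(24.0))                      # no BMI-based flag
print(risk_flag(24.0, asian_descent=True))  # raised flag -- start the conversation
```

The point is not the specific thresholds but the behavior: the same BMI value triggers a conversation in one population and not another, which is exactly what a single universal chart cannot do.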

Regardless of your personal feelings, the facts are clear. Technology empowers us with better tools than BMI to determine health status. It’s not a matter of whether we will stop using BMI but when.

Sylvia Gonsahn-Bollie, MD, DipABOM, is an integrative obesity specialist who specializes in individualized solutions for emotional and biological overeating. Connect with her at www.embraceyouweightloss.com or on Instagram @embraceyoumd. Her bestselling book, “Embrace You: Your Guide to Transforming Weight Loss Misconceptions Into Lifelong Wellness,” is Healthline.com’s Best Overall Weight Loss Book 2022 and one of Livestrong.com’s picks for the 8 Best Weight-Loss Books to Read in 2022.

A version of this article first appeared on Medscape.com.


Hip fractures likely to double by 2050 as population ages


The annual incidence of hip fractures declined in most countries from 2005 to 2018, but this rate is projected to roughly double by 2050, according to a new study of 19 countries/regions.

The study by Chor-Wing Sing, PhD, and colleagues was presented at the annual meeting of the American Society of Bone and Mineral Research. The predicted increase in hip fractures is being driven by the aging population, with the population of those age 85 and older projected to increase 4.5-fold from 2010 to 2050, they note.

The researchers also estimate that from 2018 to 2050 the annual number of hip fractures will increase 1.9-fold overall – more in men (2.4-fold) than in women (1.7-fold).

In addition, rates of use of osteoporosis drugs 1 year after a hip fracture were less than 50%, with less treatment in men. Men were also more likely than women to die within 1 year of a hip fracture.

The researchers conclude that “larger and more collaborative efforts among health care providers, policymakers, and patients are needed to prevent hip fractures and improve the treatment gap and post-fracture care, especially in men and the oldest old.”
 

Aging will fuel rise in hip fractures; more preventive treatment needed

“Even though there is a decreasing trend of hip fracture incidence in some countries, such a percentage decrease is insufficient to offset the percentage increase in the aging population,” senior co-author Ching-Lung Cheung, PhD, associate professor in the department of pharmacology and pharmacy at the University of Hong Kong, explained to this news organization.

The takeaways from the study are that “a greater effort on fracture prevention should be made to avoid the continuous increase in the number of hip fractures,” he said.

In addition, “although initiation of anti-osteoporosis medication after hip fracture is recommended in international guidelines, the 1-year treatment rate [was] well below 50% in most of the countries and regions studied. This indicates the treatment rate is far from optimal.”

“Our study also showed that the use of anti-osteoporosis medications following a hip fracture is lower in men than in women by 30% to 67%,” he said. “Thus, more attention should be paid to preventing and treating hip fractures in men.”

The greater increase in the projected number of hip fractures in men than in women “could be [because] osteoporosis is commonly perceived as a ‘woman’s disease,’ ” he speculated.

Invited to comment, Juliet Compston, MD, who selected the study as one of the top clinical science highlight abstracts at the ASBMR meeting, agrees that “there is substantial room for improvement” in osteoporosis treatment rates following a hip fracture “in all the regions covered by the study.”

“In addition,” she continues, “the wide variations in treatment rates can provide important lessons about the most effective models of care for people who sustain a hip fracture: for example, fracture liaison services.”

Men suffer as osteoporosis perceived to be a ‘woman’s disease’

The even lower treatment rate in men than women is “concerning and likely reflects the mistaken perception that osteoporosis is predominantly a disease affecting women,” notes Dr. Compston, emeritus professor of bone medicine, University of Cambridge, United Kingdom.  

Also invited to comment, Peter R. Ebeling, MD, outgoing president of the ASBMR, said that the projected doubling of hip fractures “is likely mainly due to aging of the population, with increasing lifespan for males in particular. However, increasing urbanization and decreasing weight-bearing exercise as a result are likely to also contribute in developing countries.”

“Unfortunately, despite the advances in treatments for osteoporosis over the last 25 years, osteoporosis treatment rates remain low, and osteoporosis remains undiagnosed in postmenopausal women and older men,” added Dr. Ebeling, from Monash University, Melbourne, who was not involved with the research.

“More targeted screening for osteoporosis would help,” he said, “as would treating patients for it following other minimal trauma fractures (vertebral, distal radius, and humerus, etc.), since if left untreated, about 50% of these patients will have hip fractures later in life.”

“Some countries may be doing better because they have health quality standards for hip fracture (for example, surgery within 24 hours, investigation, and treatment for osteoporosis). In other countries like Australia, bone density tests and treatment for osteoporosis are reimbursed, increasing their uptake.”

The public health implications of this study are “substantial” according to Dr. Compston. “People who have sustained a hip fracture are at high risk of subsequent fractures if untreated. There is a range of safe, cost-effective pharmacological therapies to reduce fracture rate, and wider use of these would have a major impact on the current and future burden imposed by hip fractures in the elderly population.”

Similarly, Dr. Ebeling noted that “prevention is important to save a huge health burden for patients and costs for society.”

“Patients with minimal trauma fractures (particularly hip or spinal fractures) should be investigated and treated for osteoporosis with care pathways established in the hospitals, reaching out to the community [fracture liaison services],” he said.

Support for these is being sought under Medicare in the United States, he noted, and bone densitometry reimbursement rates also need to be higher in the United States.
 

Projections for number of hip fractures to 2050

Previous international reviews of hip fractures have been based on heterogeneous data collected 10-30 or more years ago, the researchers note.

They performed a retrospective cohort study using a common protocol across 19 countries/regions, as described in an article about the protocol published in BMJ Open.

They analyzed data from adults aged 50 and older who were hospitalized with a hip fracture to determine 1) the annual incidence of hip fractures in 2008-2015; 2) the uptake of drugs to treat osteoporosis at 1 year after a hip fracture; and 3) all-cause mortality at 1 year after a hip fracture.

In a second step, they estimated the number of hip fractures that would occur from 2030 to 2050, using World Bank population growth projections.
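The article doesn’t spell out the projection formula, but the standard approach is to hold current age- and sex-specific incidence rates fixed and scale them by the projected population in each stratum. A toy sketch with made-up numbers:

```python
# Toy projection: hold age- and sex-specific incidence rates fixed and
# scale by projected population counts. All numbers are invented; the
# study used observed rates and World Bank population projections.
rates_per_100k = {("men", "85+"): 2500.0, ("women", "85+"): 3800.0}
population_2018 = {("men", "85+"): 40_000, ("women", "85+"): 80_000}
population_2050 = {("men", "85+"): 180_000, ("women", "85+"): 360_000}  # 4.5x

def projected_fractures(rates, population):
    # events = rate per 100,000 x population / 100,000
    return sum(rates[group] * population[group] / 100_000 for group in rates)

base = projected_fractures(rates_per_100k, population_2018)
future = projected_fractures(rates_per_100k, population_2050)
print(f"{future / base:.1f}-fold increase")  # 4.5-fold, since rates are held flat
```

In this toy version the fracture count grows exactly as fast as the population; in the real projections, declining incidence in many countries offsets part of the demographic growth, which is how a 4.5-fold rise in the oldest age group translates into a 1.9-fold rise in fractures.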

The data are from 20 health care databases from 19 countries/regions: Oceania (Australia, New Zealand), Asia (Hong Kong, Japan, Singapore, South Korea, Taiwan, and Thailand), Northern Europe (Denmark, Finland, and U.K.), Western Europe (France, Germany, Italy, The Netherlands, and Spain), and North and South America (Canada, United States, and Brazil).

The Japanese database covered only adults younger than 75. U.S. data are from two databases: Medicare (age ≥ 65) and Optum.

Most databases (13) covered 90%-100% of the national population, and the rest covered 5%-70% of the population.

From 2008 to 2015, the annual incidence of hip fractures declined in 11 countries/regions (Singapore, Denmark, Hong Kong, Taiwan, Finland, U.K., Italy, Spain, United States [Medicare], Canada, and New Zealand).

“One potential reason that some countries have seen relatively large declines in hip fractures is better osteoporosis management and post-fracture care,” said Dr. Sing in a press release issued by ASBMR. “Better fall-prevention programs and clearer guidelines for clinical care have likely made a difference.”

Hip fracture incidence increased in five countries (The Netherlands, South Korea, France, Germany, and Brazil) and was stable in four countries (Australia, Japan, Thailand, and United States [Optum]).

The United Kingdom had the highest rate of osteoporosis treatment at 1 year after a hip fracture (50.3%). Rates in the other countries/regions ranged from 11.5% to 37%.

Fewer men than women were receiving drugs for osteoporosis at 1 year (range 5.1% to 38.2% versus 15.0% to 54.7%).

From 2005 to 2018, rates of osteoporosis treatment at 1 year after a hip fracture declined in six countries, increased in four countries, and were stable in five countries.

All-cause mortality within 1 year of hip fracture was higher in men than in women (range 19.2% to 35.8% versus 12.1% to 25.4%).

“Among the studied countries and regions, the U.S. ranks fifth with the highest hip fracture incidence,” Dr. Cheung replied when specifically asked about this. “The risk of hip fracture is determined by multiple factors: for example, lifestyle, diet, genetics, as well as management of osteoporosis,” he noted.

“Denmark is the only country showing no projected increase, and it is because Denmark had a continuous and remarkable decrease in the incidence of hip fractures,” he added, which “can offset the number of hip fractures contributed by the population aging.”

The study was funded by Amgen. Dr. Sing and Dr. Cheung have reported no relevant financial relationships. One of the study authors is employed by Amgen.

A version of this article first appeared on Medscape.com.


Dermatoses often occur in people who wear face masks


Around half the people who wear face masks may develop acne, facial dermatitis, itch, or pressure injuries, and the risk increases with the length of time the mask is worn, according to a recently published systematic review and meta-analysis.

“This report finds the most statistically significant risk factor for developing a facial dermatosis under a face mask is how long one wears the mask. Specifically, wearing a mask for more than 4 to 6 hours correlated most strongly with the development of a facial skin problem,” Jami L. Miller, MD, associate professor of dermatology, Vanderbilt University Medical Center, Nashville, Tenn., told this news organization. Dr. Miller was not involved in the study.

“The type of mask and the environment were of less significance,” she added.

Mask wearing for infection control has been common during the COVID-19 pandemic and will likely continue for some time, study coauthors Lim Yi Shen Justin, MBBS, and Yik Weng Yew*, MBBS, MPH, PhD, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, write in Contact Dermatitis.  And cross-sectional studies have suggested a link between mask wearing and various facial dermatoses.

To evaluate this link, as well as potential risk factors for facial dermatoses, the researchers reviewed 37 studies published between 2004 and 2022 involving 29,557 adult participants self-reporting regular use of any face mask type across 17 countries in Europe and Asia. The mask types commonly studied in the papers they analyzed included surgical masks and respirators.

Facial dermatoses were self-reported in 30 studies (81.1%) and were diagnosed by trained dermatologists in seven studies (18.9%).

Dr. Justin and Dr. Yew found that:

  • The overall prevalence of facial dermatoses was 55%
  • Individually, facial dermatitis, itch, acne, and pressure injuries were consistently reported as facial dermatoses, with pooled prevalence rates of 24%, 30%, 31%, and 31%, respectively
  • The duration of mask wearing was the most significant risk factor for facial dermatoses (P < .001)
  • Respirators, including N95 masks, were not more likely than surgical masks to be linked with facial dermatoses
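The review’s exact meta-analytic model isn’t described in this article, so the sketch below shows only the generic mechanics of pooling prevalence estimates like those above: fixed-effect, inverse-variance weighting of logit-transformed proportions, run on made-up study counts (prevalence meta-analyses more often use random-effects models, which add a between-study variance term):

```python
import math

# Made-up (events, sample size) pairs -- not the review's data.
studies = [(120, 400), (55, 150), (300, 1100), (80, 260)]

def logit(p: float) -> float:
    return math.log(p / (1 - p))

# The variance of a logit-transformed proportion is approximately
# 1/events + 1/(n - events), so its inverse serves as the study weight.
weights, estimates = [], []
for events, n in studies:
    estimates.append(logit(events / n))
    weights.append(1.0 / (1.0 / events + 1.0 / (n - events)))

pooled_logit = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
pooled = 1.0 / (1.0 + math.exp(-pooled_logit))  # back-transform to a proportion
print(f"pooled prevalence: {pooled:.1%}")
```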

“Understanding risk factors of mask wearing, including situation, duration, and type of mask, may allow for targeted interventions to mitigate problems,” Dr. Yew told this news organization.

He advised taking a break from mask wearing after 4 to 6 hours to improve outcomes.  

Dr. Yew acknowledged limitations, including that most of the reviewed studies relied on self-reported symptoms.

“Patient factors were not investigated in most studies; therefore, we were not able to ascertain their contributory role in the development of facial dermatoses from mask wearing,” he said. “We were also unable to prove causation between risk factors and outcome.” 

Four dermatologists welcome the findings

Dr. Miller called this an “interesting, and certainly relevant” study, now that mask wearing is common and facial skin problems are fairly common complaints in medical visits.

“As the authors say, irritants or contact allergens with longer exposures can be expected to cause a more severe dermatitis than short contact,” she said. “Longer duration also can cause occlusion of pores and hair follicles, which can be expected to worsen acne and folliculitis.”

“I was surprised that the type of mask did not seem to matter significantly,” she added. “Patients wearing N95 masks may be relieved to know N95s do not cause more skin problems than lighter masks.”

Still, Dr. Miller had several questions, including whether the materials and chemical finishes that vary by manufacturer may affect skin conditions.

Olga Bunimovich, MD, assistant professor, department of dermatology, University of Pittsburgh School of Medicine, Pennsylvania, called this study “an excellent step towards characterizing the role masks play in facial dermatoses.”

“The study provides a window into the prevalence of these conditions, as well as some understanding of the factors that may be contributing to it,” Dr. Bunimovich, who was not part of the study, added. But “we can also utilize this information to alter behavior in the work environment, allowing ‘mask-free’ breaks to decrease the risk of facial dermatoses.”

Elma Baron, MD, professor and director, Skin Study Center, department of dermatology, Case Western Reserve University School of Medicine, Cleveland, expected skin problems to be linked with mask wearing but didn’t expect the prevalence to be as high as 55%, which she called “very significant.”

“Mask wearing is an important means to prevent transmission of communicable infections, and the practice will most likely continue,” she said.

“Given the data, it is reasonable to advise patients who are already prone to these specific dermatoses to be proactive,” she added. “Early intervention with proper topical medications, preferably prescribed by a dermatologist or other health care provider, and changing masks frequently before they get soaked with moisture, will hopefully lessen the severity of skin rashes and minimize the negative impact on quality of life.”

Also commenting on the study, Susan Massick, MD, dermatologist and clinical associate professor of internal medicine, The Ohio State University Wexner Medical Center, Westerville, said in an interview that she urges people to wear masks, despite these risks.

“The majority of concerns are straightforward, manageable, and overall benign,” she said. “We have a multitude of treatments that can help control, address, or improve symptoms.”

“Masks are an effective and easy way to protect yourself from infection, and they remain one of the most reliable preventions we have,” Dr. Massick noted. “The findings in this article should not preclude anyone from wearing a mask, nor should facial dermatoses be a cause for people to stop wearing their masks.”

The study received no funding. The authors, as well as Dr. Baron, Dr. Miller, Dr. Bunimovich, and Dr. Massick, who were not involved in the study, reported no relevant financial relationships. All experts commented by email.

A version of this article first appeared on Medscape.com.

Correction, 9/22/22: An earlier version of this article misstated the name of Dr. Yik Weng Yew.


‘Spectacular’ polypill results also puzzle docs


New research shows that “polypills” can prevent a combination of cardiovascular events and cardiovascular deaths among patients who have recently experienced a myocardial infarction.

But results from the SECURE trial, published in the New England Journal of Medicine, also raise questions.

How do the polypills reduce cardiovascular problems? And will they ever be available in the United States?

Questions about how they work center on a mystery in the trial data: the polypill – containing aspirin, an angiotensin-converting enzyme (ACE) inhibitor, and a statin – apparently conferred substantial cardiovascular protection while producing average blood pressure and lipid levels that were virtually the same as with usual care.

As to when polypills will be available, the answer may hinge on whether companies, government agencies, or philanthropic foundations come to see making and paying for such treatments – combinations of typically inexpensive generic drugs in a single pill for the sake of convenience and greater adherence – as financially worthwhile.
 

A matter of adherence?

In the SECURE trial, presented in late August at the annual congress of the European Society of Cardiology in Barcelona, investigators randomly assigned 2,499 patients with an MI in the previous 6 months to receive usual care or a polypill.

Patients in the usual-care group typically received the same types of treatments included in the polypill, only taken separately. Different versions of the polypill were available to allow for titration to tolerated doses of the component medications: aspirin (100 mg), ramipril (2.5, 5, or 10 mg), and atorvastatin (20 mg or 40 mg).
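Those component doses imply a small menu of fixed-dose combinations. Enumerating them is trivial (this assumes every pairing was actually manufactured, which the article doesn’t state):

```python
from itertools import product

# Component doses reported for the SECURE polypill; aspirin was fixed.
aspirin_mg = [100]
ramipril_mg = [2.5, 5, 10]
atorvastatin_mg = [20, 40]

variants = list(product(aspirin_mg, ramipril_mg, atorvastatin_mg))
for a, r, s in variants:
    print(f"aspirin {a} mg + ramipril {r} mg + atorvastatin {s} mg")
print(f"{len(variants)} possible dose combinations")  # 6
```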

Researchers used the Morisky Medication Adherence Scale to gauge participants’ adherence to their medication regimen and found the polypill group was more adherent. Patients who received the polypill were more likely to have a high level of adherence at 6 months (70.6% vs. 62.7%) and 24 months (74.1% vs. 63.2%), they reported. (The Morisky tool is the subject of some controversy because of aggressive licensing tactics of its creator.)

The primary endpoint of cardiovascular death, MI, stroke, or urgent revascularization was significantly less likely in the polypill group during a median of 3 years of follow-up (hazard ratio, 0.76; P = .02).

“A primary-outcome event occurred in 118 of 1,237 patients (9.5%) in the polypill group and in 156 of 1,229 (12.7%) in the usual-care group,” the researchers report.
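For readers who want the crude arithmetic behind those counts (the trial’s hazard ratio of 0.76 comes from a time-to-event model, so these raw proportions only approximate it):

```python
# Event counts as reported; derived quantities are crude approximations.
polypill_events, polypill_n = 118, 1237
usual_events, usual_n = 156, 1229

p_polypill = polypill_events / polypill_n  # ~0.095 -> 9.5%
p_usual = usual_events / usual_n           # ~0.127 -> 12.7%

arr = p_usual - p_polypill  # absolute risk reduction
nnt = 1 / arr               # number needed to treat to prevent one event
print(f"crude risk ratio ~= {p_polypill / p_usual:.2f}")  # ~0.75
print(f"ARR = {arr:.1%}, NNT ~= {nnt:.0f}")               # ~3.2%, ~32
```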

“Probably, adherence is the most important reason of how this works,” Valentin Fuster, MD, physician-in-chief at Mount Sinai Hospital, New York, who led the study, said at ESC 2022.

Still, some clinicians were left scratching their heads by the lack of difference between treatment groups in average blood pressure and levels of low-density lipoprotein (LDL) cholesterol.

In the group that received the polypill, average systolic and diastolic blood pressure at 24 months were 135.2 mmHg and 74.8 mmHg, respectively. In the group that received usual care, those values were 135.5 mmHg and 74.9 mmHg, respectively.

Likewise, “no substantial differences were found in LDL-cholesterol levels over time between the groups, with a mean value at 24 months of 67.7 mg/dL in the polypill group and 67.2 mg/dL in the usual-care group,” according to the researchers.

One explanation for the findings is that greater adherence led to beneficial effects that were not reflected in lipid and blood pressure measurements, the investigators said. Alternatively, the open-label trial design could have led to different health behaviors between groups, they suggested.

Martha Gulati, MD, director of preventive cardiology at Cedars-Sinai Medical Center, Los Angeles, said she loves the idea of polypills. But she wonders about the lack of difference in blood pressure and lipids in SECURE.

Dr. Gulati said she sees in practice how medication adherence and measurements of blood pressure and lipids typically go hand in hand.

When a patient initially responds to a medication, but then their LDL cholesterol goes up later, “my first question is, ‘Are you still taking your medication or how frequently are you taking it?’” Dr. Gulati said in an interview. “And I get all kinds of answers.”

“If you are more adherent, why wouldn’t your LDL actually be lower, and why wouldn’t your blood pressure be lower?” she asked.

Can the results be replicated?

Ethan J. Weiss, MD, a cardiologist and volunteer associate clinical professor of medicine at the University of California, San Francisco, said the SECURE results are “spectacular,” but the seeming disconnect with the biomarker measurements “doesn’t make for a clean story.”

“It just seems like if you are making an argument that this is a way to improve compliance ... you would see some evidence of improved compliance objectively” in the biomarker readings, Dr. Weiss said.

Trying to understand how the polypill worked requires more imagination. “Or it makes you just say, ‘Who cares what the mechanism is?’ These people did a lot better, full stop, and that’s all that matters,” he said.

Dr. Weiss said he expects some degree of replication of the results may be needed before practice changes.

To Steven E. Nissen, MD, chief academic officer of the Heart and Vascular Institute at Cleveland Clinic, the results “don’t make any sense.”

“If they got the same results on the biomarkers that the pill was designed to intervene upon, why are the [primary outcome] results different? It’s completely unexplained,” Dr. Nissen said.

In general, Dr. Nissen has not been an advocate of the polypill approach in higher-income countries.

“Medicine is all about customization of therapy,” he said. “Not everybody needs blood pressure lowering. Not everybody needs the same intensity of LDL reduction. We spend much of our lives seeing patients and treating their blood pressure, and if it doesn’t come down adequately, giving them a higher dose or adding another agent.”

Polypills might be reasonable for primary prevention in countries where people have less access to health care resources, he added. In such settings, a low-cost, simple treatment strategy might have benefit.

But Dr. Nissen still doesn’t see a role for a polypill in secondary prevention.

“I think we have to take a step back, take a deep breath, and look very carefully at the science and try to understand whether this, in fact, is sensible,” he said. “We may need another study to see if this can be replicated.”

For Dhruv S. Kazi, MD, the results of the SECURE trial offer an opportunity to rekindle conversations about the use of polypills for cardiovascular protection. These conversations and studies have been taking place for nearly two decades.

Dr. Kazi, associate director of the Richard A. and Susan F. Smith Center for Outcomes Research in Cardiology at Beth Israel Deaconess Medical Center, Boston, has used models to study the expected cost-effectiveness of polypills in various countries.

Although polypills can improve patients’ adherence to their prescribed medications, Dr. Kazi and colleagues have found that treatment gaps are “often at the physician level,” with many patients not prescribed all of the medications from which they could benefit.

Availability of polypills could help address those gaps. At the same time, many patients, even those with higher incomes, may have a strong preference for taking a single pill.

Dr. Kazi’s research also shows that a polypill approach may be more economically attractive as countries develop because successful treatment averts cardiovascular events that are costlier to treat.
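
Dr. Kazi’s models themselves aren’t reproduced here, but the core arithmetic behind any such analysis is the incremental cost-effectiveness ratio (ICER): the extra cost of a strategy divided by the extra health benefit it buys. The sketch below shows that calculation in Python; every number in it is an invented placeholder, not a figure from his research.

```python
# Hypothetical cost-effectiveness sketch: the incremental
# cost-effectiveness ratio (ICER) of a polypill strategy vs. usual care.
# All inputs are illustrative placeholders, not data from any real model.

def icer(cost_new: float, cost_old: float,
         qaly_new: float, qaly_old: float) -> float:
    """Extra cost per extra quality-adjusted life-year (QALY) gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Per-patient lifetime costs and QALYs under each strategy (made up).
usual_care_cost, usual_care_qalys = 12_000.0, 8.10
polypill_cost, polypill_qalys = 12_900.0, 8.25  # drug cost partly offset by averted events

ratio = icer(polypill_cost, usual_care_cost, polypill_qalys, usual_care_qalys)
print(f"ICER: ${ratio:,.0f} per QALY gained")  # $6,000 per QALY in this example
```

A strategy whose ICER falls below a payer’s willingness-to-pay threshold is considered cost-effective; as the cost of treating averted events rises, the numerator shrinks and the polypill looks better, which is the dynamic Dr. Kazi describes.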

“In the United States, in order for this to work, we would need a polypill that is both available widely but also affordable,” Dr. Kazi said. “It is going to require a visionary mover” to make that happen.

That could include philanthropic foundations. But it could also be a business opportunity for a company like Barcelona-based Ferrer, which provided the polypills for the SECURE trial.

The clinical and economic evidence in support of polypills has been compelling, Dr. Kazi said: “We have to get on with the business of implementing something that is effective and has the potential to greatly improve population health at scale.” 

The SECURE trial was funded by the European Union Horizon 2020 program and coordinated by the Spanish National Center for Cardiovascular Research (CNIC). Ferrer International provided the polypill that was used in the trial. CNIC receives royalties for sales of the polypill from Ferrer. Dr. Weiss is starting a biotech company unrelated to this area of research.

A version of this article first appeared on Medscape.com.

Crystal Bone algorithm predicts early fractures, uses ICD codes


The novel Crystal Bone algorithm (Amgen) predicted 2-year risk of osteoporotic fractures in a large dataset with accuracy consistent with FRAX 10-year risk predictions, researchers report.

The algorithm was built using machine learning and artificial intelligence to predict fracture risk based on International Classification of Diseases (ICD) codes, as described in an article published in the Journal of Medical Internet Research.

The current validation study was presented September 9 as a poster at the annual meeting of the American Society for Bone and Mineral Research.

The scientists validated the algorithm in more than 100,000 patients aged 50 and older (that is, at risk of fracture) who were part of the Reliant Medical Group dataset (a subset of Optum Care).

Importantly, the algorithm predicted an increased risk of fracture in many patients who did not have a diagnosis of osteoporosis.

The next steps are validation in other datasets to support the generalizability of Crystal Bone across U.S. health care systems, Elinor Mody, MD, Reliant Medical Group, and colleagues report.

“Implementation research, in which patients identified by Crystal Bone undergo a bone health assessment and receive ongoing management, will help inform the clinical utility of this novel algorithm,” they conclude.

At the poster session, Tina Kelley, Optum Life Sciences, explained: “It’s a screening tool that says: ‘These are your patients that maybe you should spend a little extra time with, ask a few extra questions.’ ”

However, further study is needed before the tool is ready for use in clinical practice, she emphasized to this news organization.

‘A very useful advance’ but needs further validation

Invited to comment, Peter R. Ebeling, MD, outgoing president of the ASBMR, noted that “many clinicians now use FRAX to calculate absolute fracture risk and select patients who should initiate anti-osteoporosis drugs.”

With FRAX, clinicians input a patient’s age, sex, weight, height, previous fracture, [history of] parent with fractured hip, current smoking status, glucocorticoids, rheumatoid arthritis, secondary osteoporosis, alcohol (3 units/day or more), and bone mineral density (by DXA at the femoral neck) into the tool, to obtain a 10-year probability of fracture.
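
For readers unfamiliar with the tool, the sketch below simply collects the FRAX inputs just listed into one record. FRAX is a web calculator with no official programming interface, so the structure and field names here are purely illustrative.

```python
# Illustrative only: the FRAX inputs listed above, gathered in one record.
# FRAX itself is a web calculator with no official Python API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FraxInputs:
    age: int                        # years
    sex: str                        # "male" or "female"
    weight_kg: float
    height_cm: float
    previous_fracture: bool
    parent_fractured_hip: bool
    current_smoker: bool
    glucocorticoids: bool
    rheumatoid_arthritis: bool
    secondary_osteoporosis: bool
    alcohol_3_or_more_units_daily: bool
    femoral_neck_bmd: Optional[float] = None  # g/cm2 by DXA; optional in FRAX

patient = FraxInputs(age=67, sex="female", weight_kg=62, height_cm=160,
                     previous_fracture=True, parent_fractured_hip=False,
                     current_smoker=False, glucocorticoids=False,
                     rheumatoid_arthritis=False, secondary_osteoporosis=False,
                     alcohol_3_or_more_units_daily=False,
                     femoral_neck_bmd=0.62)
```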

“Crystal Bone takes a different approach,” Dr. Ebeling, from Monash University, Melbourne, who was not involved with the research but who disclosed receiving funding from Amgen, told this news organization in an email.

The algorithm uses electronic health records (EHRs) to identify patients who are likely to have a fracture within the next 2 years, he explained, based on diagnoses and medications associated with osteoporosis and fractures. These include ICD-10 codes for fractures at various sites and secondary causes of osteoporosis (such as rheumatoid and other inflammatory arthritis, chronic obstructive pulmonary disease, asthma, celiac disease, and inflammatory bowel disease).
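
To make the EHR-based idea concrete, here is a deliberately oversimplified sketch of flagging patients by ICD-10 code prefix. The real Crystal Bone model is a machine-learning algorithm trained on coded histories, not a hand-written rule, and the prefixes below are common examples chosen for illustration rather than its actual feature set.

```python
# Toy illustration of ICD-10-based flagging, in the spirit described above.
# The prefixes and the one-line rule are hypothetical stand-ins for what is
# actually a trained machine-learning model.

PRIOR_FRACTURE_PREFIXES = ("S22", "S32", "S52", "S72")  # thoracic / lumbar-pelvic / forearm / femur fractures
SECONDARY_CAUSE_PREFIXES = ("M05", "M06",   # rheumatoid arthritis
                            "J44", "J45",   # COPD, asthma
                            "K90.0",        # celiac disease
                            "K50", "K51")   # Crohn's disease, ulcerative colitis

def flag_for_bone_health_review(icd10_codes: list[str]) -> bool:
    """Flag a patient whose history holds fracture or secondary-cause codes."""
    has_fracture = any(c.startswith(PRIOR_FRACTURE_PREFIXES) for c in icd10_codes)
    has_secondary_cause = any(c.startswith(SECONDARY_CAUSE_PREFIXES) for c in icd10_codes)
    return has_fracture or has_secondary_cause

print(flag_for_bone_health_review(["J45.40", "E11.9"]))  # True: asthma code present
```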

“This is a very useful advance,” Dr. Ebeling summarized, “in that it would alert the clinician to patients in their practice who have a high fracture risk and need to be investigated for osteoporosis and initiated on treatment. Otherwise, the patients would be missed, as currently often occurs.”

“It would need to be adaptable to other [EMR] systems and to be validated in a large separate population to be ready to enter clinical practice,” he said, “but these data look very promising with a good [positive predictive value (PPV)].”

Similarly, Juliet Compston, MD, said: “It provides a novel, fully automated approach to population-based screening for osteoporosis using EHRs to identify people at high imminent risk of fracture.”

Dr. Compston, emeritus professor of bone medicine, University of Cambridge, England, who was not involved with the research but who also disclosed being a consultant for Amgen, selected the study as one of the top clinical science highlights abstracts at the meeting.

“The algorithm looks at ICD codes for previous history of fracture, medications that have adverse effects on bone – for example glucocorticoids, aromatase inhibitors, and anti-androgens – as well as chronic diseases that increase the risk of fracture,” she explained.

“FRAX is the most commonly used tool to estimate fracture probability in clinical practice and to guide treatment decisions,” she noted. However, “currently it requires human input of data into the FRAX website and is generally only performed on individuals who are selected on the basis of clinical risk factors.”

“The Crystal Bone algorithm offers the potential for fully automated population-based screening in older adults to identify those at high risk of fracture, for whom effective therapies are available to reduce fracture risk,” she summarized.

“It needs further validation,” she noted, “and implementation into clinical practice requires the availability of high-quality EHRs.”

Algorithm validated in 106,328 patients aged 50 and older

Despite guidelines that recommend screening for osteoporosis in women aged 65 and older, men older than 70, and adults aged 50-79 with risk factors, real-world data suggest that screening rates are low, the researchers note.

The current validation study identified 106,328 patients aged 50 and older who had at least 2 years of consecutive medical history with the Reliant Medical Group from December 2014 to November 2020 as well as at least two EHR codes.

The accuracy of predicting a fracture within 2 years, expressed as area under the receiver operating characteristic curve (AUROC), was 0.77, where 1.0 is perfect, 0.5 is no better than random selection, 0.7-0.8 is considered acceptable, and 0.8-0.9 indicates excellent predictive accuracy.
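
For readers curious how such a figure is produced, the sketch below computes an AUROC on invented labels and risk scores; the metric measures how often a true fracture case is ranked above a non-case.

```python
# How an AUROC like the 0.77 above is computed. Labels and risk scores
# here are invented for illustration only.
from sklearn.metrics import roc_auc_score

y_true = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]    # 1 = fractured within 2 years
y_score = [0.82, 0.10, 0.65, 0.40, 0.20,   # model-assigned risk scores
           0.05, 0.44, 0.55, 0.75, 0.30]

print(round(roc_auc_score(y_true, y_score), 2))  # 0.76 on this toy data
```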

In the entire Optum Reliant population older than 50, the risk of fracture within 2 years was 1.95%.

The algorithm identified four groups with a greater risk: 19,100 patients had a threefold higher risk of fracture within 2 years, 9,246 patients had a fourfold higher risk, 3,533 patients had a sevenfold higher risk, and 1,735 patients had a ninefold higher risk.
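
Assuming each multiplier is relative to that overall 1.95% rate, the implied absolute 2-year risks work out as below; this is back-of-envelope arithmetic, not a set of figures reported in the poster.

```python
# Rough absolute risks implied by the tiers above, assuming each
# multiplier applies to the overall 1.95% 2-year fracture rate.
baseline = 0.0195
tiers = [(19_100, 3), (9_246, 4), (3_533, 7), (1_735, 9)]
for n_patients, multiplier in tiers:
    print(f"{n_patients:>6} patients: ~{baseline * multiplier * 100:.2f}% 2-year risk")
# ~5.85%, ~7.80%, ~13.65%, ~17.55%
```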

Many of these patients had no prior diagnosis of osteoporosis

For example, of the 19,100 patients with a threefold greater risk of fracture within 2 years, 69% had not been diagnosed with osteoporosis (49% had no history of fracture; 20% had a prior fracture).

The algorithm had a positive predictive value of 6%-18%, a negative predictive value of 98%-99%, a specificity of 81%-98%, and a sensitivity of 18%-59%, for the four groups.
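
All four of those metrics come from the same 2x2 confusion matrix. The counts in this sketch are invented to fall inside the reported ranges and exist only to show the arithmetic.

```python
# Sensitivity, specificity, PPV, and NPV from a 2x2 confusion matrix.
# The counts are invented for illustration, not taken from the study.
tp, fp, fn, tn = 120, 880, 180, 8820   # true/false positives, false/true negatives

sensitivity = tp / (tp + fn)   # share of actual fracture cases that were flagged
specificity = tn / (tn + fp)   # share of non-cases correctly left unflagged
ppv = tp / (tp + fp)           # share of flagged patients who truly fracture
npv = tn / (tn + fn)           # share of unflagged patients who truly do not

print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}, "
      f"PPV {ppv:.0%}, NPV {npv:.0%}")   # 40%, 91%, 12%, 98%
```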

The study was funded by Amgen. Dr. Mody and another author are Reliant Medical Group employees. Ms. Kelley and another author are Optum Life Sciences employees. One author is an employee at Landing AI. Two authors are Amgen employees and own Amgen stock. Dr. Ebeling has disclosed receiving research funding from Amgen, Sanofi, and Alexion, and his institution has received honoraria from Amgen and Kyowa Kirin. Dr. Compston has disclosed receiving speaking and consultancy fees from Amgen and UCB.

A version of this article first appeared on Medscape.com.


A ‘big breakfast’ diet affects hunger, not weight loss


The old saying ‘breakfast like a king, lunch like a prince, and dine like a pauper’ is wrong, at least in terms of weight control, according to a new University of Aberdeen study published in Cell Metabolism. The idea that ‘front-loading’ calories early in the day might aid dieting was based on the belief that consuming the bulk of daily calories in the morning optimizes weight loss by burning calories more efficiently and quickly.

“There are a lot of myths surrounding the timing of eating and how it might influence either body weight or health,” said senior author Alexandra Johnstone, PhD, a researcher at the Rowett Institute, University of Aberdeen, who specializes in appetite control. “This has been driven largely by the circadian rhythm field. But we in the nutrition field have wondered how this could be possible. Where would the energy go? We decided to take a closer look at how time of day interacts with metabolism.”

Her team undertook a randomized crossover trial of 30 overweight and obese subjects recruited via social media ads. Participants – 16 men and 14 women – had a mean age of 51 years and body mass indexes of 27-42 kg/m2, but were otherwise healthy. The researchers compared two calorie-restricted but isoenergetic weight loss diets: morning-loaded calories, with 45% of intake at breakfast, 35% at lunch, and 20% at dinner, and evening-loaded calories, with the inverse proportions of 20%, 35%, and 45% at breakfast, lunch, and dinner, respectively.

Each diet was followed for 4 weeks, with a controlled baseline diet in which calories were balanced throughout the day provided for 1 week at the outset and during a 1-week washout period between the two intervention diets. Each person’s calorie intake was fixed, referenced to their individual measured resting metabolic rate, to assess the effect on weight loss and energy expenditure of meal timing under isoenergetic intake. Both diets were designed to provide the same nutrient composition of 30% protein, 35% carbohydrate, and 35% fat.
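
To make the two schedules and the fixed composition concrete, the sketch below distributes a hypothetical 1,800-kcal daily target across the three meals and converts the 30/35/35 macronutrient split into grams using the standard 4, 4, and 9 kcal/g energy densities; in the trial itself, each participant’s target was derived from their own measured resting metabolic rate.

```python
# Illustration of the two isoenergetic meal schedules. The 1,800 kcal
# daily target is a made-up example; in the trial, intake was fixed per
# participant from their measured resting metabolic rate.
DAILY_KCAL = 1800
SCHEDULES = {
    "morning-loaded": (0.45, 0.35, 0.20),   # breakfast, lunch, dinner shares
    "evening-loaded": (0.20, 0.35, 0.45),
}
for name, shares in SCHEDULES.items():
    meals = {meal: round(DAILY_KCAL * share)
             for meal, share in zip(("breakfast", "lunch", "dinner"), shares)}
    print(name, meals)   # e.g. morning-loaded: 810 / 630 / 360 kcal

# Fixed composition (30% protein, 35% carbohydrate, 35% fat) in grams,
# using 4, 4, and 9 kcal per gram for protein, carbohydrate, and fat.
protein_g = DAILY_KCAL * 0.30 / 4   # 135 g
carb_g = DAILY_KCAL * 0.35 / 4      # ~158 g
fat_g = DAILY_KCAL * 0.35 / 9       # 70 g
print(f"protein {protein_g:.0f} g, carbohydrate {carb_g:.0f} g, fat {fat_g:.0f} g")
```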

All food and beverages were provided, “making this the most rigorously controlled study to assess timing of eating in humans to date,” the team said, “with the aim of accounting for all aspects of energy balance.”

No optimum time to eat for weight loss

Results showed that both diets resulted in significant weight reduction at the end of each dietary intervention period, with subjects losing an average of just over 3 kg during each of the 4-week periods. However, there was no difference in weight loss between the morning-loaded and evening-loaded diets.

The relative size of breakfast and dinner – whether a person eats the largest meal early or late in the day – does not have an impact on metabolism, the team said. This challenges previous studies that have suggested that “evening eaters” – now a majority of the U.K. population – have a greater likelihood of gaining weight and greater difficulty in losing it.

“Participants were provided with all their meals for 8 weeks, and their energy expenditure and body composition were monitored for changes using gold standard techniques at the Rowett Institute,” Dr. Johnstone said. “The same number of calories was consumed by volunteers at different times of the day, with energy expenditure measured using analysis of urine.

“This study is important because it challenges the previously held belief that eating at different times of the day leads to differential energy expenditure. The research shows that under weight loss conditions there is no optimum time to eat in order to manage weight, and that change in body weight is determined by energy balance.”

Meal timing reduces hunger but does not affect weight loss

However, the research also revealed that when subjects consumed the morning-loaded (big breakfast) diet, they reported feeling significantly less hungry later in the day. “Morning-loaded intake may assist with compliance to weight loss regime, through a greater suppression of appetite,” the authors said, adding that this “could foster easier weight loss in the real world.”

“The participants reported that their appetites were better controlled on the days they ate a bigger breakfast and that they felt satiated throughout the rest of the day,” Dr. Johnstone said.

“We know that appetite control is important to achieve weight loss, and our study suggests that those consuming the most calories in the morning felt less hungry, in contrast to when they consumed more calories in the evening period.

“This could be quite useful in the real-world environment, versus in the research setting that we were working in.”

‘Major finding’ for chrono-nutrition

Coauthor Jonathan Johnston, PhD, professor of chronobiology and integrative physiology at the University of Surrey, said: “This is a major finding for the field of meal timing (‘chrono-nutrition’) research. Many aspects of human biology change across the day and we are starting to understand how this interacts with food intake.

“Our new research shows that, in weight loss conditions, the size of breakfast and dinner regulates our appetite but not the total amount of energy that our bodies use,” Dr. Johnston said. “We plan to build upon this research to improve the health of the general population and specific groups, e.g., shift workers.”

It’s possible that shift workers could have different metabolic responses, due to the disruption of their circadian rhythms, the team said. Dr. Johnstone noted that this type of experiment could also be applied to the study of intermittent fasting (time-restricted eating), to help determine the best time of day for people to consume their calories.

“One thing that’s important to note is that when it comes to timing and dieting, there is not likely going to be one diet that fits all,” she concluded. “Figuring this out is going to be the future of diet studies, but it’s something that’s very difficult to measure.”

Great variability in individual responses to diets

Commenting on the study, Helena Gibson-Moore, RNutr (PH), nutrition scientist and spokesperson for the British Nutrition Foundation, said: “With about two in three adults in the UK either overweight or obese, it’s important that research continues to look into effective strategies for people to lose weight.” She described the study as “interesting,” and a challenge to previous research supporting “front-loading” calories earlier in the day as more effective for weight loss.

“However, whilst in this study there were no differences in weight loss, participants did report significantly lower hunger when eating a higher proportion of calories in the morning,” she said. “Therefore, for people who prefer having a big breakfast this may still be a useful way to help compliance to a weight loss regime through feeling less hungry in the evening, which in turn may lead to a reduced calorie intake later in the day.

“However, research has shown that as individuals we respond to diets in different ways. For example, a study comparing weight loss after a healthy low-fat diet vs. a healthy low-carbohydrate diet showed similar mean weight loss at 12 months, but there was large variability in the personal responses to each diet with some participants actually gaining weight.

“Differences in individual responses to dietary exposures have led to research into a personalized nutrition approach, which requires collection of personal data and then provides individualized advice based on this.” Research has suggested that personalized dietary and physical activity advice was more effective than conventional generalized advice, she said.

“The bottom line for effective weight loss is that it is clear there is ‘no one size fits all’ approach and different weight loss strategies can work for different people but finding effective strategies for long-term sustainability of weight loss continues to be the major challenge. There are many factors that impact successful weight management and for some people it may not just be what we eat that is important, but also how and when we eat.”

This study was funded by the Medical Research Council and the Scottish Government, Rural and Environment Science and Analytical Services Division.

A version of this article first appeared on Medscape.co.uk.

Publications
Topics
Sections

The old saying ‘breakfast like a king, lunch like a prince, and dine like a pauper’ is wrong, at least in terms of weight control, according to a new study, published in Cell Metabolism, from the University of Aberdeen. The idea that ‘front-loading’ calories early in the day might help dieting attempts was based on the belief that consuming the bulk of daily calories in the morning optimizes weight loss by burning calories more efficiently and quickly.

“There are a lot of myths surrounding the timing of eating and how it might influence either body weight or health,” said senior author Alexandra Johnstone, PhD, a researcher at the Rowett Institute, University of Aberdeen, who specializes in appetite control. “This has been driven largely by the circadian rhythm field. But we in the nutrition field have wondered how this could be possible. Where would the energy go? We decided to take a closer look at how time of day interacts with metabolism.”

Her team undertook a randomized crossover trial of 30 overweight and obese subjects recruited via social media ads. Participants – 16 men and 14 women – had a mean age of 51 years, and body mass index of 27-42 kg/ m2 but were otherwise healthy. The researchers compared two calorie-restricted but isoenergetic weight loss diets: morning-loaded calories with 45% of intake at breakfast, 35% at lunch, and 20% at dinner, and evening-loaded calories with the inverse proportions of 20%, 35%, and 45% at breakfast, lunch, and dinner, respectively.

Each diet was followed for 4 weeks, with a controlled baseline diet in which calories were balanced throughout the day provided for 1 week at the outset and during a 1-week washout period between the two intervention diets. Each person’s calorie intake was fixed, referenced to their individual measured resting metabolic rate, to assess the effect on weight loss and energy expenditure of meal timing under isoenergetic intake. Both diets were designed to provide the same nutrient composition of 30% protein, 35% carbohydrate, and 35% fat.

All food and beverages were provided, “making this the most rigorously controlled study to assess timing of eating in humans to date,” the team said, “with the aim of accounting for all aspects of energy balance.”
 

No optimum time to eat for weight loss

Results showed that both diets resulted in significant weight reduction at the end of each dietary intervention period, with subjects losing an average of just over 3 kg during each of the 4-week periods. However, there was no difference in weight loss between the morning-loaded and evening-loaded diets.

The relative size of breakfast and dinner – whether a person eats the largest meal early or late in the day – does not have an impact on metabolism, the team said. This challenges previous studies that have suggested that “evening eaters” – now a majority of the U.K. population – have a greater likelihood of gaining weight and greater difficulty in losing it.

“Participants were provided with all their meals for 8 weeks and their energy expenditure and body composition monitored for changes, using gold standard techniques at the Rowett Institute,” Dr. Johnstone said. “The same number of calories was consumed by volunteers at different times of the day, with energy expenditure measures using analysis of urine.

“This study is important because it challenges the previously held belief that eating at different times of the day leads to differential energy expenditure. The research shows that under weight loss conditions there is no optimum time to eat in order to manage weight, and that change in body weight is determined by energy balance.”
 

 

 

Meal timing reduces hunger but does not affect weight loss

However, the research also revealed that when subjects consumed the morning-loaded (big breakfast) diet, they reported feeling significantly less hungry later in the day. “Morning-loaded intake may assist with compliance to weight loss regime, through a greater suppression of appetite,” the authors said, adding that this “could foster easier weight loss in the real world.”

“The participants reported that their appetites were better controlled on the days they ate a bigger breakfast and that they felt satiated throughout the rest of the day,” Dr. Johnstone said.

“We know that appetite control is important to achieve weight loss, and our study suggests that those consuming the most calories in the morning felt less hungry, in contrast to when they consumed more calories in the evening period.

“This could be quite useful in the real-world environment, versus in the research setting that we were working in.”
 

‘Major finding’ for chrono-nutrition

Coauthor Jonathan Johnston, PhD, professor of chronobiology and integrative physiology at the University of Surrey, said: “This is a major finding for the field of meal timing (‘chrono-nutrition’) research. Many aspects of human biology change across the day and we are starting to understand how this interacts with food intake.

“Our new research shows that, in weight loss conditions, the size of breakfast and dinner regulates our appetite but not the total amount of energy that our bodies use,” Dr. Johnston said. “We plan to build upon this research to improve the health of the general population and specific groups, e.g, shift workers.”

It’s possible that shift workers could have different metabolic responses, due to the disruption of their circadian rhythms, the team said. Dr. Johnstone noted that this type of experiment could also be applied to the study of intermittent fasting (time-restricted eating), to help determine the best time of day for people to consume their calories.

“One thing that’s important to note is that when it comes to timing and dieting, there is not likely going to be one diet that fits all,” she concluded. “Figuring this out is going to be the future of diet studies, but it’s something that’s very difficult to measure.”
 

Great variability in individual responses to diets

Commenting on the study, Helena Gibson-Moore, RNutr (PH), nutrition scientist and spokesperson for the British Nutrition Foundation, said: “With about two in three adults in the UK either overweight or obese, it’s important that research continues to look into effective strategies for people to lose weight.” She described the study as “interesting,” and a challenge to previous research supporting “front-loading” calories earlier in the day as more effective for weight loss.

“However, whilst in this study there were no differences in weight loss, participants did report significantly lower hunger when eating a higher proportion of calories in the morning,” she said. “Therefore, for people who prefer having a big breakfast this may still be a useful way to help compliance to a weight loss regime through feeling less hungry in the evening, which in turn may lead to a reduced calorie intake later in the day.

“However, research has shown that as individuals we respond to diets in different ways. For example, a study comparing weight loss after a healthy low-fat diet vs. a healthy low-carbohydrate diet showed similar mean weight loss at 12 months, but there was large variability in the personal responses to each diet with some participants actually gaining weight.

“Differences in individual responses to dietary exposures have led to research into a personalized nutrition approach, which requires collection of personal data and then provides individualized advice based on this.” Research has suggested that personalized dietary and physical activity advice was more effective than conventional generalized advice, she said.

“The bottom line for effective weight loss is that it is clear there is ‘no one size fits all’ approach and different weight loss strategies can work for different people but finding effective strategies for long-term sustainability of weight loss continues to be the major challenge. There are many factors that impact successful weight management and for some people it may not just be what we eat that is important, but also how and when we eat.”

This study was funded by the Medical Research Council and the Scottish Government, Rural and Environment Science and Analytical Services Division.

A version of this article first appeared on Medscape.co.uk.


FROM CELL METABOLISM


 How does salt intake relate to mortality?


Intake of salt is a biological necessity, inextricably woven into physiologic systems. However, excessive salt intake is associated with high blood pressure. Hypertension is linked to increased cardiovascular morbidity and mortality, and it is estimated that excessive salt intake causes approximately 5 million deaths per year worldwide. Reducing salt intake lowers blood pressure, but processed foods contain “hidden” salt, which makes dietary control of salt difficult. This problem is compounded by growing inequalities in food systems, which present another hurdle to sustaining individual dietary control of salt intake.


Of the 87 risk factors included in the Global Burden of Diseases, Injuries, and Risk Factors Study 2019, high systolic blood pressure was identified as the leading risk factor for disease burden at the global level and for its effect on human health. A range of strategies, including primary care management and reduction in sodium intake, are known to reduce the burden of this critical risk factor. Two questions remain unanswered: “What is the relationship between mortality and adding salt to foods?” and “How much does a reduction in salt intake influence people’s health?”
 

Cardiovascular disease and death

Because dietary sodium intake has been identified as a risk factor for cardiovascular disease and premature death, high sodium intake can be expected to curtail life span. A study tested this hypothesis by analyzing the relationship between sodium intake and life expectancy and survival in 181 countries. Sodium intake correlated positively with life expectancy and inversely with all-cause mortality worldwide and in high-income countries, which argues against dietary sodium intake curtailing life span or being a risk factor for premature death. These results help fuel a scientific debate about sodium intake, life expectancy, and mortality. The debate requires interpreting composite data showing positive linear, J-shaped, or inverse linear correlations, which underscores the uncertainty surrounding this issue.

In a prospective study of 501,379 participants from the UK Biobank, researchers found that higher frequency of adding salt to foods was significantly associated with a higher risk of premature mortality and lower life expectancy independently of diet, lifestyle, socioeconomic level, and preexisting diseases. They found that the positive association appeared to be attenuated with increasing intake of high-potassium foods (vegetables and fruits).

In addition, the researchers made the following observations:

  • For cause-specific premature mortality, higher frequency of adding salt to foods was significantly associated with higher risks of both cardiovascular disease mortality and cancer mortality (both P-trend < .001).
  • Always adding salt to foods was associated with lower life expectancy at the age of 50 years by 1.50 years (95% confidence interval, 0.72-2.30) for women and 2.28 years (95% CI, 1.66-2.90) for men, compared with participants who never or rarely added salt to foods.

The researchers noted that adding salt to foods (usually at the table) is common and is directly related to an individual’s long-term preference for salty foods and habitual salt intake. Indeed, in the Western diet, adding salt at the table accounts for 6%-20% of total salt intake. In addition, commonly used table salt contains 97%-99% sodium chloride, minimizing the potential confounding effects of other dietary factors, including potassium. Therefore, adding salt to foods provides a way to evaluate the association between habitual sodium intake and mortality – something that is relevant, given that it has been estimated that in 2010, a total of 1.65 million deaths from cardiovascular causes were attributable to consumption of more than 2.0 g of sodium per day.
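
Because the article switches between salt (sodium chloride) and sodium figures, a back-of-envelope conversion helps keep them straight. The sketch below is a minimal illustration, not part of any cited study; the helper names are hypothetical, and the only inputs are standard molar masses plus the 97%-99% NaCl purity quoted above.

```python
# Illustrative salt <-> sodium conversion using standard molar masses
# (Na ~22.99 g/mol, Cl ~35.45 g/mol); helper names are hypothetical.
NA_MOLAR = 22.99
CL_MOLAR = 35.45
SODIUM_FRACTION = NA_MOLAR / (NA_MOLAR + CL_MOLAR)  # ~0.393 of NaCl mass is sodium

def sodium_from_salt(salt_g: float, nacl_purity: float = 0.98) -> float:
    """Grams of sodium delivered by a given mass of table salt (~97%-99% NaCl)."""
    return salt_g * nacl_purity * SODIUM_FRACTION

def salt_from_sodium(sodium_g: float, nacl_purity: float = 0.98) -> float:
    """Grams of table salt that contain a given mass of sodium."""
    return sodium_g / (nacl_purity * SODIUM_FRACTION)

# The 2.0 g/day sodium threshold cited above works out to roughly 5 g of table salt:
print(f"2.0 g sodium ~= {salt_from_sodium(2.0):.1f} g table salt")  # ~5.2 g
```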

Salt sensitivity

Current evidence supports a recommendation for moderate sodium intake in the general population (3-5 g/day). Persons with hypertension should consume salt at the lower end of that range. Some dietary guidelines recommend consuming less than 2,300 mg dietary sodium per day for persons aged 14 years or older and less for persons aged 2-13 years. Although low sodium intake (< 2.0 g/day) has been achieved in short-term clinical trials, sustained low sodium intake has not been achieved in any of the longer-term clinical trials (duration > 6 months).

The controversy continues as to the relationship between low sodium intake and blood pressure or cardiovascular disease. Most studies show that, both in individuals with hypertension and in those without, blood pressure is reduced by consuming less sodium; however, cutting intake below 3-5 g/day does not necessarily lower it further. With a sodium-rich diet, most normotensive individuals experience only a minimal change in mean arterial pressure, whereas in many individuals with hypertension it rises by about 4 mm Hg. Among individuals with hypertension who are “salt sensitive,” arterial pressure can increase by > 10 mm Hg in response to high sodium intake.
 

The effect of potassium

Replacing some of the sodium chloride in regular salt with potassium chloride may mitigate some of salt’s harmful cardiovascular effects. Indeed, salt substitutes that have reduced sodium levels and increased potassium levels have been shown to lower blood pressure.

In one trial, researchers enrolled over 20,000 persons from 600 villages in rural China and compared the use of regular salt (100% sodium chloride) with the use of a salt substitute (75% sodium chloride and 25% potassium chloride by mass).

The participants were at high risk for stroke, cardiovascular events, and death. The mean duration of follow-up was 4.74 years. The results were surprising. The rate of stroke was lower with the salt substitute than with regular salt (29.14 events vs. 33.65 events per 1,000 person-years; rate ratio, 0.86; 95% CI, 0.77-0.96; P = .006), as were the rates of major cardiovascular events and death from any cause. The rate of serious adverse events attributed to hyperkalemia was not significantly higher with the salt substitute than with regular salt.
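
As a quick arithmetic check, the crude ratio of the two stroke rates quoted above can be recomputed directly; it lands near the published rate ratio of 0.86, which comes from the trial’s adjusted model. This is a minimal sketch, not the investigators’ analysis code.

```python
# Recompute the crude stroke rate ratio from the rates quoted above
# (events per 1,000 person-years); variable names are ours, not the trial's.
substitute_rate = 29.14  # salt substitute arm
regular_rate = 33.65     # regular salt arm

rate_ratio = substitute_rate / regular_rate
absolute_reduction = regular_rate - substitute_rate

print(f"crude rate ratio ~= {rate_ratio:.2f}")  # ~0.87, close to the adjusted 0.86
print(f"~{absolute_reduction:.1f} fewer strokes per 1,000 person-years")  # ~4.5
```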

Although there is an ongoing debate about the extent of salt’s effects on the cardiovascular system, there is no doubt that in most places in the world, people are consuming more salt than the body needs.

A lot depends on the kind of diet consumed by a particular population. Processed food is rarely used in rural areas such as those involved in the above-mentioned trial; instead, sodium chloride is added while preparing food at home. That makes home-cooking salt a determining factor for cardiovascular outcomes in such settings, but the finding cannot be generalized to other social-environmental settings.

In much of the world, commercial food preservation introduces a large amount of sodium chloride into the diet, so most salt intake cannot be addressed through salt substitutes alone. Indeed, by comparing the sodium content of cereal-based products currently sold on the Italian market with the respective benchmarks proposed by the World Health Organization, researchers found that for most items the sodium content is much higher than the benchmarks, especially for flatbreads, leavened breads, and crackers/savory biscuits. This shows that there is work to be done to achieve the World Health Organization/United Nations objective of a 30% global reduction in sodium intake by 2025.

This article was translated from Univadis Italy. A version of this article first appeared on Medscape.com.


The potential problem(s) with a once-a-year COVID vaccine


Comments from the White House this week suggesting a once-a-year COVID-19 shot for most Americans, “just like your annual flu shot,” were met with backlash from many who say COVID and influenza come from different viruses and need different schedules.

Reactions, ranging from charges of “capitulation” to complaints that there are too few data, hit the airwaves and social media.

Some, however, agree with the White House vision and say that asking people to get one shot in the fall instead of periodic pushes for boosters will raise public confidence and buy-in and reduce consumer confusion.  

Health leaders, including Bob Wachter, MD, chair of the department of medicine at the University of California, San Francisco, say they like the framing of the concept – that people who are not high-risk should plan each year for a COVID shot and a flu shot.

“Doesn’t mean we KNOW shot will prevent transmission for a year. DOES mean it’ll likely lower odds of SEVERE case for a year & we need strategy to bump uptake,” Dr. Wachter tweeted this week.

But the numbers of Americans seeking boosters remain low. Only one-third of all eligible people 50 years and older have gotten a second COVID booster, according to the Centers for Disease Control and Prevention. About half of those who got the original two shots got a first booster.
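
Those two uptake figures use different denominators (first-booster uptake among primary-series recipients vs. second-booster uptake among all eligible adults 50 and older), but treating them as stages of one funnel, purely as an illustrative assumption rather than the CDC’s methodology, shows how coverage erodes with each successive dose:

```python
# Purely illustrative booster-uptake funnel built from the figures quoted above.
# The conditional step is an inference, since the article's two fractions do
# not share a denominator.
primary = 1.00                  # a cohort assumed to have completed the original two shots
first_booster = 0.50 * primary  # "about half" went on to a first booster
second_booster = 1 / 3          # "one-third of all eligible" 50+ got a second booster

# Implied conditional uptake among those who already had a first booster:
conditional_second = second_booster / first_booster
print(f"implied second-booster uptake among the boosted: {conditional_second:.0%}")  # ~67%
```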

Meanwhile, the United States is still averaging about 70,000 new COVID cases and more than 300 deaths every day.

The suggested change in approach comes as Pfizer/BioNTech and Moderna roll out their new boosters targeting the Omicron subvariants BA.4 and BA.5, after the U.S. Food and Drug Administration granted emergency use authorization and the CDC recommended their use.

“As the virus continues to change, we will now be able to update our vaccines annually to target the dominant variant,” President Joe Biden said in a statement promoting the yearly approach.

Some say annual shot premature

Other experts say it’s too soon to tell whether an annual approach will work.

“We have no data to support that current vaccines, including the new BA.5 booster, will provide durable protection beyond 4-6 months. It would be good to aspire to this objective, and much longer duration of protection, but that will likely require next-generation and nasal vaccines,” said Eric Topol, MD, Medscape’s editor-in-chief and founder and director of the Scripps Research Translational Institute.

A report in Nature Reviews Immunology states, “Mucosal vaccines offer the potential to trigger robust protective immune responses at the predominant sites of pathogen infection” and potentially “can prevent an infection from becoming established in the first place, rather than only curtailing infection and protecting against the development of disease symptoms.”

Dr. Topol tweeted after the White House statements, “[An annual vaccine] has the ring of Covid capitulation.”

William Schaffner, MD, an infectious disease expert at Vanderbilt University, Nashville, Tenn., told this news organization that he cautions against interpreting the White House comments as official policy.

“This is the difficulty of having public health announcements come out of Washington,” he said. “They ought to come out of the CDC.”

He says there is a reasonable analogy between COVID and influenza, but warns, “don’t push the analogy.”

They are both serious respiratory viruses that can cause much illness and death in essentially the same populations, he notes: older, frail people and those who have underlying illnesses or are immunocompromised.

Both viruses also mutate. But there the paths diverge.

“We’ve gotten into a pattern of annually updating the influenza vaccine because it is such a singularly seasonal virus,” Dr. Schaffner said. “Basically it disappears during the summer. We’ve had plenty of COVID during the summers.”

For COVID, he said, “We will need a periodic booster. Could this be annually? That would certainly make it easier.” But it’s too soon to tell, he said.

Dr. Schaffner noted that several manufacturers are working on a combined flu/COVID vaccine.

Just a ‘first step’ toward annual shot

The currently updated COVID vaccine may be the first step toward an annual vaccine, but it’s only the first step, Dr. Schaffner said. “We haven’t committed to further steps yet because we’re watching this virus.”

Syra Madad, DHSc, MSc, an infectious disease epidemiologist at Harvard University’s Belfer Center for Science and International Affairs, Cambridge, Mass., and the New York City hospital system, told this news organization that arguments on both sides make sense.

Having a single message once a year could help clear up the considerable confusion created by individual booster timelines, differing levels of immunity, and separate COVID and flu vaccination campaigns running at different times of the year.

“Communication around vaccines is very muddled and that shows in our overall vaccination rates, particularly booster rates,” she says. “The overall strategy is hopeful and makes sense if we’re going to progress that way based on data.”

However, she said that the data are just not there yet to show it’s time for an annual vaccine. First, scientists will need to see how long protection lasts with the Omicron-specific vaccine and how well and how long it protects against severe disease and death as well as infection.

COVID is less predictable than influenza, and the influenza vaccine has been around for decades, Dr. Madad noted. Influenza follows a “ladder-like pattern” that makes its seasons easier to anticipate, she said. “COVID-19 is not like that.”

What is hopeful, she said, “is that we’ve been in the Omicron dynasty since November of 2021. I’m hopeful that we’ll stick with that particular variant.”

Dr. Topol, Dr. Schaffner, and Dr. Madad declared no relevant financial relationships.

A version of this article first appeared on Medscape.com.
