The Journal of Clinical Outcomes Management® is an independent, peer-reviewed journal offering evidence-based, practical information for improving the quality, safety, and value of health care.

FDA okays cancer drugs faster than EMA. But at what cost?

Over the past decade, the U.S. Food and Drug Administration has approved new cancer drugs twice as fast as the European Medicines Agency (EMA), often using accelerated pathways, a new analysis shows.

Between 2010 and 2019, the FDA approved almost all oncology therapies ahead of the EMA. Drugs entered the United States market about 8 months (241 days) before European market authorization.

But do quicker review times translate to wins for patients?

“The faster FDA approval process potentially provides earlier access to potentially life-prolonging medications for patients with cancer in the United States,” Ali Raza Khaki, MD, department of oncology, Stanford (Calif.) University School of Medicine, told this news organization. “On the surface, this is a good thing. However, it comes with limitations.”

Earlier drug approval often means greater uncertainty about an agent’s benefit – most notably, whether it will improve a patient’s survival or quality of life. Dr. Khaki pointed to a study published in JAMA Internal Medicine, which found that only 19 of 93 (20%) cancer drugs that had been recently approved through the FDA’s accelerated approval pathway demonstrated an improvement in overall survival.

In the new study, published online in JAMA Network Open, Dr. Khaki and colleagues found that among the 89 cancer drugs approved in the United States and Europe between January 2010 and December 2019, the FDA approved 85 (95%) before European authorization and four (5%) after.

The researchers found that the median FDA review time was half that of the EMA’s (200 vs. 426 days). Furthermore, 64 new drug applications (72%) were submitted to the FDA first, compared with 21 (23%) to the EMA.

Of the drugs approved through an accelerated pathway, three were ultimately pulled from the U.S. market, compared with one in Europe.

“These early drug approvals that later lead to withdrawal expose many more patients to toxicity, including financial toxicity, given the high cost of cancer medications,” Dr. Khaki commented.

In addition, 35 oncology therapies (39%) were approved by the FDA before trial results were published, compared with only eight (9%) by the EMA. Although FDA drug labels contain some information about efficacy and toxicity, scientific publications often have much more, including details about study populations and toxicities.

“Without this information, providers may be limited in their knowledge about patient selection, clinical benefit, and optimal toxicity management,” Dr. Khaki said.

Jeff Allen, PhD, president and CEO of the nonprofit Friends of Cancer Research, who wasn’t involved in the study, believes that an FDA approval before publication shouldn’t be “particularly concerning.”

“Peer-reviewed publication is an important component of validating and communicating scientific findings, but the processes and time lines for individual journals can be highly variable,” he said. “I don’t think we would want to see a situation where potential beneficial treatments are held up due to unrelated publication processes.”

The author of an invited commentary in JAMA Network Open had a different take on the study findings.

“A tempting interpretation” of this study is that the FDA is a “superior agency for expedited review times that bring cancer drugs to patients earlier,” Kristina Jenei, BSN, MSc, with the University of British Columbia School of Population and Public Health, writes. In addition, the fact that more drugs were pulled from the market after approval in the United States than in Europe could be interpreted to mean that the system is working as it should.

Although the speed of FDA reviews and the number of subsequent approvals have increased over time, the proportion of cancer drugs that improve survival has declined. In addition, because the FDA’s follow-up of postmarketing studies has been “inconsistent,” a substantial number of cancer drugs that were approved through accelerated pathways have remained on the market for years without confirmation of their benefit.

Although regulatory agencies must balance earlier patient access to novel treatments with evidence that the therapies are effective and safe, “faster review times and approvals are not cause for celebration; better patient outcomes are,” Ms. Jenei writes. “In other words, quality over quantity.”

The study was supported by the National Cancer Institute. Dr. Khaki reported stock ownership in Merck and Sanofi outside the submitted work. Dr. Allen and Ms. Jenei have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

FROM JAMA NETWORK OPEN

Vaping safety views shifted following lung injury reports

Adults in the United States increasingly perceive electronic cigarettes, or e-cigarettes, as “more harmful” than traditional cigarettes, according to a new study published in the American Journal of Preventive Medicine.

In addition, the percentage of people who exclusively used traditional cigarettes almost doubled between 2019 and 2020 among those who perceived e-cigarettes as more harmful, jumping from 8.4% in 2019 to 16.3% in 2020.

“We were able to show that these changes in perception potentially changed behaviors on a population level,” said Priti Bandi, PhD, principal scientist at the American Cancer Society in Atlanta and lead author of the study.

Since e-cigarettes entered the U.S. market in 2006, public health experts have questioned claims from manufacturers that the products work as a harm reduction tool to help traditional cigarette smokers quit. Public perception has generally been that e-cigarettes are safer for a person’s health. While research on users’ long-term health outcomes is still emerging, public opinion has shifted since the introduction of the devices.

The new study showed a sharp change in public perception of e-cigarettes following media coverage of cases of users who presented to emergency rooms with mysterious lung symptoms in 2019. The Centers for Disease Control and Prevention eventually found that what are now called e-cigarette or vaping product use–associated lung injuries were linked to vitamin E acetate, an additive to tetrahydrocannabinol-containing products but not nicotine.

The last update from the CDC came in February 2020, shortly before the COVID-19 pandemic swept through the United States, prompting a sharp shift to investigate the new virus among both health care providers and researchers.

Dr. Bandi and colleagues gathered 2018-2020 data from a National Institutes of Health database called the Health Information National Trends Survey, a mail-based, nationally representative, cross-sectional survey of U.S. adults and their attitudes toward cancer- and health-related information. More than 3,000 people each year responded to questions about e-cigarettes.

The study found that the percentage of people who believed e-cigarettes to be more harmful than traditional cigarettes more than tripled, from 6.8% in 2018 to 28.3% in 2020. The percentage who viewed e-cigarettes as less harmful than traditional cigarettes also fell, from 17.6% in 2018 to 11.4% in 2020, and fewer people said they were unsure which product was more harmful.

Among those who believed e-cigarettes were “relatively” less harmful than traditional cigarettes, use of e-cigarettes jumped from 15.3% in 2019 to 26.7% in 2020.
 

The implications

The main finding – that people started smoking cigarettes when they thought e-cigarettes were more harmful – should be a wake-up call to public health officials and doctors who communicate health risks to patients, according to Dr. Bandi and other experts.

Messaging should be more nuanced, Dr. Bandi said. Many adults use e-cigarettes as a cessation tool, and she and other experts point to research showing the products are, at least in the short term, less harmful, particularly when used to stop smoking. Vapes are among the most popular tools people use when they want to quit – with the majority of U.S. adults who quit using vapes either partially or fully in the attempt, according to the CDC.

Some countries, such as England, are moving to allow doctors to prescribe e-cigarettes to help reduce smoking rates. United Kingdom regulatory authorities said in 2021 that they were considering licensing the devices for use in smoking cessation.

“There is an absolute need for ongoing, accurate communication from public health authorities targeted toward the appropriate audiences,” Dr. Bandi said.

Ashley Brooks-Russell, PhD, MPH, associate professor at the University of Colorado at Denver, Aurora, said the finding that perceptions can change behavior is good news. However, the bad news is that adults overcorrected and switched to cigarettes, which are proven to cause cancer and other health conditions.

“We’re good in public health about messaging that cigarettes are bad, that tobacco is broadly harmful,” Dr. Brooks-Russell said in an interview. “We’re really bad at talking about lesser options, like if you’re going to smoke, e-cigarettes are less harmful.” 

But other health leaders warn that e-cigarettes might produce the same adverse health outcomes as cigarettes, or worse. Researchers will gain a conclusive answer only decades into a patient’s life. Until then, it’s not clear whether any potential benefit from smoking cessation will outweigh the risks.

“This research should remind healthcare providers to find out what products patients are using, how much, and if those patients experience health issues later on,” said Kevin McQueen, MHA, lead respiratory director at University of Colorado Health System and president of the Colorado Respiratory Care Society.

“My concern is that while people are starting to think e-cigarettes are more dangerous, some people still think they are safe – and we don’t know how much safer they are,” he said. “And we aren’t going to know until 10, 15, 20 years from now.”

All authors were employed by the American Cancer Society at the time of the study, which receives grants from private and corporate foundations, including foundations associated with companies in the health sector for research outside of the submitted work. The authors are not funded by or key personnel for any of these grants, and their salaries are solely funded through American Cancer Society funds. No other financial disclosures were reported.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Adults in the United States increasingly perceive electronic cigarettes, or e-cigarettes, as “more harmful” than traditional cigarettes, according to a new study published in the American Journal of Preventive Medicine.

In addition, the percentage of people who exclusively used traditional cigarettes almost doubled between 2019 and 2020 among those who perceived e-cigarettes as more harmful, jumping from 8.4% in 2019 to 16.3% in 2020.

“We were able to show that these changes in perception potentially changed behaviors on a population level,” said Priti Bandi, PhD, principal scientist at the American Cancer Society in Atlanta and lead author of the study.

Since e-cigarettes entered the U.S. market in 2006, public health experts have questioned claims from manufacturers that the products work as a harm reduction tool to help traditional cigarette smokers to quit. Public perceptions have generally been that e-cigarettes are safer for a person’s health. While the research is still emerging on the long-term health outcomes of users, public opinion has shifted since the introduction of the devices.

The new study showed a sharp change in public perception of e-cigarettes following media coverage of cases of users who presented to emergency rooms with mysterious lung symptoms in 2019. The Centers for Disease Control and Prevention eventually found that what are now called e-cigarette or vaping product use–associated lung injuries were linked to vitamin E acetate, an additive to tetrahydrocannabinol-containing products but not nicotine.

The last update from the CDC came in February 2020, shortly before the COVID-19 pandemic swept through the United States, prompting a sharp shift to investigate the new virus among both health care providers and researchers.

Dr. Bandi and colleagues gathered 2018-2020 data from a National Institutes of Health database called the Health Information National Trends Survey, a mail-based, nationally representative, cross-sectional survey of U.S. adults and their attitudes of cancer and health-related information. More than 3,000 people each year responded to questions about e-cigarettes.

The study found that the percentage of people who believed e-cigarettes to be more harmful than traditional cigarettes more than tripled from 6.8% in 2018 to 28.3% in 2020. Fewer people also viewed e-cigarettes as less harmful than traditional cigarettes, falling from 17.6% in 2018 to 11.4% in 2020. Fewer people also said they were unsure about which product was more harmful.

Among those who believed e-cigarettes were “relatively” less harmful than traditional cigarettes, use of e-cigarettes jumped from 15.3% in 2019 to 26.7% in 2020.
 

The implications

The main finding that people started smoking cigarettes when they thought e-cigarettes were more harmful should be a wake-up to public health officials and doctors who communicate health risks to patients, according to Dr. Bandi and other experts.

Messaging should be more nuanced, Dr. Bandi said. Many adults use e-cigarettes as a cessation tool, and she and other experts point to research that shows the products are, at least in the short-term, less harmful especially as a smoking cessation tool. Vapes are among the most popular tools people use when they want to quit smoking – with the majority of U.S. adults using vapes either partially or fully to quit, according to the CDC.

Some countries, such as England, are moving to allow doctors to prescribe e-cigarettes to help reduce smoking rates. United Kingdom regulatory authorities in 2021 said they’re considering allowing licensing the devices for use in smoking cessation.

“There is an absolute need for ongoing, accurate communication from public health authorities targeted toward the appropriate audiences,” Bandi said.

Ashley Brooks-Russell, PhD, MPH, associate professor at the University of Colorado at Denver, Aurora, said the finding that perceptions can change behavior is good news. However, the bad news is that adults overcorrected and switched to cigarettes, which are proven to cause cancer and other health conditions.

“We’re good in public health about messaging that cigarettes are bad, that tobacco is broadly harmful,” Dr. Brooks-Russell said in an interview. “We’re really bad at talking about lesser options, like if you’re going to smoke, e-cigarettes are less harmful.” 

But other health leaders warn that e-cigarettes might produce the same adverse health outcomes, or worse, as cigarettes. The only way researchers will gain a conclusive answer is decades into a patient’s life. Until then, it’s not clear if any potential benefit from smoking cessation will outweigh the risks.

“This research should remind healthcare providers to find out what products patients are using, how much, and if those patients experience health issues later on,” said Kevin McQueen, MHA, lead respiratory director at University of Colorado Health System and president of the Colorado Respiratory Care Society.

“My concern is that while people are starting to think e-cigarettes are more dangerous, some people still think they are safe – and we don’t know how much safer they are,” he said. “And we aren’t going to know until 10, 15, 20 years from now.”

All authors were employed by the American Cancer Society at the time of the study, which receives grants from private and corporate foundations, including foundations associated with companies in the health sector for research outside of the submitted work. The authors are not funded by or key personnel for any of these grants, and their salaries are solely funded through American Cancer Society funds. No other financial disclosures were reported.

A version of this article first appeared on Medscape.com.

Adults in the United States increasingly perceive electronic cigarettes, or e-cigarettes, as “more harmful” than traditional cigarettes, according to a new study published in the American Journal of Preventive Medicine.

In addition, the percentage of people who exclusively used traditional cigarettes almost doubled between 2019 and 2020 among those who perceived e-cigarettes as more harmful, jumping from 8.4% in 2019 to 16.3% in 2020.

“We were able to show that these changes in perception potentially changed behaviors on a population level,” said Priti Bandi, PhD, principal scientist at the American Cancer Society in Atlanta and lead author of the study.

Since e-cigarettes entered the U.S. market in 2006, public health experts have questioned claims from manufacturers that the products work as a harm reduction tool to help traditional cigarette smokers to quit. Public perceptions have generally been that e-cigarettes are safer for a person’s health. While the research is still emerging on the long-term health outcomes of users, public opinion has shifted since the introduction of the devices.

The new study showed a sharp change in public perception of e-cigarettes following media coverage of cases of users who presented to emergency rooms with mysterious lung symptoms in 2019. The Centers for Disease Control and Prevention eventually found that what are now called e-cigarette or vaping product use–associated lung injuries were linked to vitamin E acetate, an additive to tetrahydrocannabinol-containing products but not nicotine.

The last update from the CDC came in February 2020, shortly before the COVID-19 pandemic swept through the United States, prompting a sharp shift to investigate the new virus among both health care providers and researchers.

Dr. Bandi and colleagues gathered 2018-2020 data from a National Institutes of Health database called the Health Information National Trends Survey, a mail-based, nationally representative, cross-sectional survey of U.S. adults and their attitudes of cancer and health-related information. More than 3,000 people each year responded to questions about e-cigarettes.

The study found that the percentage of people who believed e-cigarettes to be more harmful than traditional cigarettes more than tripled from 6.8% in 2018 to 28.3% in 2020. Fewer people also viewed e-cigarettes as less harmful than traditional cigarettes, falling from 17.6% in 2018 to 11.4% in 2020. Fewer people also said they were unsure about which product was more harmful.

Among those who believed e-cigarettes were “relatively” less harmful than traditional cigarettes, use of e-cigarettes jumped from 15.3% in 2019 to 26.7% in 2020.
 

The implications

The main finding that people started smoking cigarettes when they thought e-cigarettes were more harmful should be a wake-up to public health officials and doctors who communicate health risks to patients, according to Dr. Bandi and other experts.

Messaging should be more nuanced, Dr. Bandi said. Many adults use e-cigarettes as a cessation tool, and she and other experts point to research that shows the products are, at least in the short-term, less harmful especially as a smoking cessation tool. Vapes are among the most popular tools people use when they want to quit smoking – with the majority of U.S. adults using vapes either partially or fully to quit, according to the CDC.

Some countries, such as England, are moving to allow doctors to prescribe e-cigarettes to help reduce smoking rates. United Kingdom regulatory authorities in 2021 said they’re considering allowing licensing the devices for use in smoking cessation.

“There is an absolute need for ongoing, accurate communication from public health authorities targeted toward the appropriate audiences,” Bandi said.

Ashley Brooks-Russell, PhD, MPH, associate professor at the University of Colorado at Denver, Aurora, said the finding that perceptions can change behavior is good news. However, the bad news is that adults overcorrected and switched to cigarettes, which are proven to cause cancer and other health conditions.

“We’re good in public health about messaging that cigarettes are bad, that tobacco is broadly harmful,” Dr. Brooks-Russell said in an interview. “We’re really bad at talking about lesser options, like if you’re going to smoke, e-cigarettes are less harmful.” 

But other health leaders warn that e-cigarettes might produce the same adverse health outcomes, or worse, as cigarettes. The only way researchers will gain a conclusive answer is decades into a patient’s life. Until then, it’s not clear if any potential benefit from smoking cessation will outweigh the risks.

“This research should remind healthcare providers to find out what products patients are using, how much, and if those patients experience health issues later on,” said Kevin McQueen, MHA, lead respiratory director at University of Colorado Health System and president of the Colorado Respiratory Care Society.

“My concern is that while people are starting to think e-cigarettes are more dangerous, some people still think they are safe – and we don’t know how much safer they are,” he said. “And we aren’t going to know until 10, 15, 20 years from now.”

All authors were employed by the American Cancer Society at the time of the study. The organization receives grants from private and corporate foundations, including foundations associated with companies in the health sector, for research outside of the submitted work. The authors are not funded by or key personnel for any of these grants, and their salaries are funded solely through American Cancer Society funds. No other financial disclosures were reported.

A version of this article first appeared on Medscape.com.

FROM THE AMERICAN JOURNAL OF PREVENTIVE MEDICINE


COVID-19 Pandemic stress affected ovulation, not menstruation


ATLANTA – Disturbances in ovulation that didn’t produce any actual changes in the menstrual cycle of women were extremely common during the first year of the COVID-19 pandemic and were linked to emotional stress, according to the findings of an “experiment of nature” that allowed for comparison with women a decade earlier.

Findings from two studies of reproductive-age women, one conducted in 2006-2008 and the other in 2020-2021, were presented by Jerilynn C. Prior, MD, at the annual meeting of the Endocrine Society.

The comparison of the two time periods yielded several novel findings. “I was taught in medical school that when women don’t eat enough they lose their period. But what we now understand is there’s a graded response to various stressors, acting through the hypothalamus in a common pathway. There is a gradation of disturbances, some of which are subclinical or not obvious,” said Dr. Prior, professor of endocrinology and metabolism at the University of British Columbia, Vancouver.

Moreover, women’s menstrual cycle lengths didn’t differ across the two time periods, despite a dramatic 63% decrement in normal ovulatory function related to increased depression, anxiety, and outside stresses that the women reported in diaries.

“Assuming that regular cycles need normal ovulation is something we should just get out of our minds. It changes our concept about what’s normal if we only know about the cycle length,” she observed.

It will be critical going forward to see whether the ovulatory disturbances have resolved as the pandemic has shifted “because there’s strong evidence that ovulatory disturbances, even with normal cycle length, are related to bone loss and some evidence it’s related to early heart attacks, breast and endometrial cancers,” Dr. Prior said during a press conference.


Asked to comment, session moderator Genevieve Neal-Perry, MD, PhD, told this news organization: “I think what we can take away is that stress itself is a modifier of the way the brain and the gonads communicate with each other, and that then has an impact on ovulatory function.”

Dr. Neal-Perry noted that the association of stress and ovulatory disruption has been reported in various ways previously, but “clearly it doesn’t affect everyone. What we don’t know is who is most susceptible. There have been some studies showing a genetic predisposition and a genetic anomaly that actually makes them more susceptible to the impact of stress on the reproductive system.”

But the lack of data on weight change in the study cohorts is a limitation. “To me one of the more important questions was what was going on with weight. Just looking at a static number doesn’t tell you whether there were changes. We know that weight gain or weight loss can stress the reproductive axis,” noted Dr. Neal-Perry of the department of obstetrics and gynecology at the University of North Carolina at Chapel Hill.
 

‘Experiment of nature’ revealed invisible effect of pandemic stress

The women in both cohorts of the Menstruation Ovulation Study (MOS) were healthy volunteers aged 19-35 years recruited from the metropolitan Vancouver region. All were menstruating monthly and none were taking hormonal birth control. Recruitment for the second cohort had begun just prior to the March 2020 COVID-19 pandemic lockdown.

Interviewer-administered questionnaires (CaMos) covering demographics, socioeconomic status, and reproductive history, and daily diaries kept by the women (menstrual cycle diary) were identical for both cohorts.

Assessments of ovulation differed for the two studies but were cross-validated. For the earlier time period, ovulation was assessed by a threefold increase in follicular-to-luteal urinary progesterone (PdG). For the pandemic-era study, the validated quantitative basal temperature (QBT) method was used.

There were 301 women in the earlier cohort and 125 during the pandemic. Both cohorts had an average age of about 29 years and an average body mass index of about 24.3 kg/m2 (within the normal range). The pandemic cohort was more racially/ethnically diverse than the earlier one and more in line with recent census data.

More of the women were nulliparous during the pandemic than earlier (92.7% vs. 80.4%; P = .002).

The distribution of menstrual cycle lengths didn’t differ, with both cohorts averaging about 30 days (P = .893). However, while 90% of the women in the earlier cohort ovulated normally, only 37% did during the pandemic, a highly significant difference (P < .0001).

Thus, during the pandemic, 63% of women had “silent ovulatory disturbances,” either with short luteal phases after ovulation or no ovulation, compared with just 10% in the earlier cohort, “which is remarkable, unbelievable actually,” Dr. Prior remarked.  

The difference wasn’t explained by any of the demographic information collected either, including socioeconomic status, lifestyle, or reproductive history variables.

And it wasn’t because of COVID-19 vaccination, as the vaccine wasn’t available when most of the women were recruited, and of the 79 who were recruited during vaccine availability, only two received a COVID-19 vaccine during the study (and both had normal ovulation).

Employment changes, caring responsibilities, and worry likely causes

The information from the diaries was more revealing. Several diary components differed markedly during the pandemic, including negative mood (feeling depressed or anxious, sleep problems, and outside stresses) as well as changes in self-worth, interest in sex, energy level, and appetite. All were significantly different between the two cohorts (P < .001) and between those with and without ovulatory disturbances.

“So menstrual cycle lengths and long cycles didn’t differ, but there was a much higher prevalence of silent or subclinical ovulatory disturbances, and these were related to the increased stresses that women recorded in their diaries. This means that the estrogen levels were pretty close to normal but the progesterone levels were remarkably decreased,” Dr. Prior said.

Interestingly, reported menstrual cramps were also significantly more common during the pandemic and associated with ovulatory disruption.

“That is a new observation because previously we’ve always thought that you needed to ovulate in order to even have cramps,” she commented.

Asked whether COVID-19 itself might have played a role, Dr. Prior said no woman in the study tested positive for the virus or had long COVID.

“As far as I’m aware, it was the changes in employment … and caring for elders and worry about illness in somebody you loved that was related,” she said.

Asked what she thinks the result would be if the study were conducted now, she said: “I don’t know. We’re still in a stressful time with inflation and not complete recovery, so probably the issue is still very present.”

Dr. Prior and Dr. Neal-Perry have reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


AT ENDO 2022

What are the signs of post–acute infection syndromes?


The long-term health consequences of COVID-19 have refocused attention on post–acute infection syndromes (PAIS) and opened a discussion about the need to fully understand their multisystemic pathophysiology, clinical indicators, and epidemiology, which remain a significant blind spot in medicine. A better understanding of these persistent symptom profiles, not only for post-acute sequelae of SARS-CoV-2 infection (PASC), better known as long COVID, but also for other diseases with unexplained post-acute sequelae, would allow doctors to fine-tune the diagnostic criteria. A clear definition and better understanding of post–acute infection symptoms is a necessary step toward developing an evidence-based, multidisciplinary management approach.

PAIS, PASC, or long COVID

The observation of unexplained chronic sequelae after SARS-CoV-2 is known as PASC or long COVID.

Long COVID has been reported as a syndrome in survivors of serious and critical disease, but the effects also persist over time for subjects who experienced a mild infection that did not require admission to hospital. This means that PASC, especially when occurring after a mild or moderate COVID-19 infection, shares many of the same characteristics as chronic diseases triggered by other pathogenic organisms, many of which have not been sufficiently clarified.

PAIS are characterized by a set of core symptoms centering on the following:

  • Exertion intolerance
  • Disproportionate levels of fatigue
  • Neurocognitive and sensory impairment
  • Flu-like symptoms
  • Unrefreshing sleep
  • Myalgia/arthralgia

A plethora of nonspecific symptoms are often present to various degrees.

These similarities suggest a unifying pathophysiology that needs to be elucidated to properly understand and manage postinfectious chronic disability.
 

Overview of PAIS

A detailed review of what is currently known about PAIS was published in Nature Medicine. It offers useful information to help address the poor recognition of these conditions in clinical practice, which can leave patients facing delayed clinical care or none at all.

The following consolidated postinfection sequelae are mentioned:

  • Q fever fatigue syndrome, which follows infection by the intracellular bacterium Coxiella burnetii
  • Post-dengue fatigue syndrome, which can follow infection by the mosquito-borne dengue virus
  • Fatiguing and rheumatic symptoms in a subset of individuals infected with chikungunya virus, a mosquito-borne virus that causes fever and joint pain in the acute phase
  • Post-polio syndrome, which can emerge as many as 15-40 years after an initial poliomyelitis attack (similarly, some other neurotropic microbes, such as West Nile virus, might lead to persistent effects)
  • Prolonged, debilitating, chronic symptoms have long been reported in a subset of patients after common and typically nonserious infections. For example, after mononucleosis, a condition generally caused by Epstein-Barr virus (EBV), and after an outbreak of Giardia lamblia, an intestinal parasite that usually causes acute intestinal illness. In fact, several studies identified the association of this outbreak of giardiasis with chronic fatigue, irritable bowel syndrome (IBS), and fibromyalgia persisting for many years.
  • Views expressed in the literature regarding the frequency and the validity of posttreatment Lyme disease syndrome are divided. Although substantial evidence points to persistence of arthralgia, fatigue, and subjective neurocognitive impairments in a minority of patients with Lyme disease after the recommended antibiotic treatment, some of the early studies have failed to characterize the initial Lyme disease episode with sufficient rigor.
 

 

Symptoms and signs

Based on the available evidence, the symptoms and signs seen most frequently in clinical practice may be characterized as the following:

  • Exertion intolerance, fatigue
  • Flu-like and ‘sickness behavior’ symptoms: fever, feverishness, muscle pain, feeling sick, malaise, sweating, irritability
  • Neurological/neurocognitive symptoms: brain fog, impaired concentration or memory, trouble finding words
  • Rheumatologic symptoms: chronic or recurrent joint pain
  • Trigger-specific symptoms: for example, eye problems post Ebola, IBS post Giardia, anosmia and ageusia post COVID-19, motor disturbances post polio and post West Nile virus

Myalgic encephalomyelitis/chronic fatigue syndrome

Patients with this disorder experience worsening of symptoms following physical, cognitive, or emotional exertion above their (very low) tolerated limit. Other prominent features frequently observed in myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) are neurocognitive impairments (colloquially referred to as brain fog), unrefreshing sleep, pain, sensory disturbances, gastrointestinal issues, and various forms of dysautonomia. Up to 75% of ME/CFS cases report an infection-like episode preceding the onset of their illness. Postinfectious and postviral fatigue syndromes were originally postulated as subsets of chronic fatigue syndrome. However, there appears to be no clear consensus at present about whether these terms should be considered synonymous to the ME/CFS label or any of its subsets, or include a wider range of postinfectious fatigue conditions.

Practical diagnostic criteria

A review of the available criteria suggests that the diagnostic criteria for a PAIS should include not only the presence of symptoms but ideally also their intensity, course, and constellation within an individual, since the individual symptoms and symptom trajectories of PAIS vary over time, making a comparison of symptom presence at a single time point misleading. Furthermore, when a diagnosis of ME/CFS is made, attention should be given to the choice of diagnostic criteria, with preference for the more conservative criteria, so as not to run the risk of overestimating the syndrome.

Asthenia is the cornerstone symptom in most epidemiological studies of PAIS, but it would be reductive to concentrate on it alone. Other characteristics, such as the exacerbation of symptoms following exertion, together with the characteristic symptoms and signs described above, may allow better identification of the overall clinical picture in these postinfection syndromes, which have a significant impact on a patient’s quality of life.

This article was translated from Univadis Italy. A version of this article appeared on Medscape.com.



What’s the best time of day to exercise? It depends on your goals


For most of us, the “best” time of day to work out is simple: When we can.

Maybe that’s before or after work. Or when the gym offers free daycare. Or when our favorite instructor teaches our favorite class.

That’s why we call it a “routine.” And if the results are the same, it’s hard to imagine changing it up.

But what if the results aren’t the same?


They may not be, according to a new study from a research team at Skidmore College in Saratoga Springs, N.Y. The results of a 12-week exercise program were different for morning versus evening workouts.

Women who worked out in the morning lost more fat, while those who trained in the evening gained more upper-body strength and power. As for men, the performance improvements were similar no matter when they exercised. But those who did so in the evening had a significant drop in blood pressure, among other benefits.

The study is part of a growing body of research showing different results for different times of day among different populations. As it turns out, when you exercise can ultimately have a big effect, not just on strength and fat loss, but also heart health, mood, and quality of sleep.
 

An accidental discovery

The original goal of the Skidmore study was to test a unique fitness program with a group of healthy, fit, and extremely active adults in early middle age.

The program includes four workouts a week, each with a different focus: strength, steady-pace endurance, high-intensity intervals, and flexibility (traditional stretching combined with yoga and Pilates exercises).

But because the group was so large – 27 women and 20 men completed the 3-month program – the researchers split participants into morning and evening workout groups.

It wasn’t until researchers looked at the results that they saw the differences between morning and evening exercise, says lead author Paul Arciero, PhD.

Dr. Arciero stressed that participants in every group got leaner and stronger. But the women who worked out in the morning got much bigger reductions in body fat and body-fat percentage than the evening group. Meanwhile, women in the evening group got much bigger gains in upper-body strength, power, and muscular endurance than their morning counterparts.

Among the men, the evening group had significantly larger improvements in blood pressure, cholesterol levels, and the percentage of fat they burned for energy, along with a bigger drop in feelings of fatigue.
 

Strategic timing for powerful results

Some of these findings are consistent with previous research. For example, a study published in 2021 showed that the ability to exert high effort and express strength and power peaks in the late afternoon, about the same time that your core body temperature is at its highest point.

On the other hand, you’ll probably perform better in the morning when the activity requires a lot of skill and coordination or depends on strategic decision-making.

The findings apply to both men and women.

Performance aside, exercise timing might offer strong health benefits for men with type 2 diabetes, or at high risk for it.

A study showed that men who exercised between 3 p.m. and 6 p.m. saw dramatic improvements in blood sugar management and insulin sensitivity, compared to a group that worked out between 8 a.m. and 10 a.m.

They also lost more fat during the 12-week program, even though they were doing the exact same workouts.
 

 

 

Train consistently, sleep well

When you exercise can affect your sleep quality in many ways, said neuroscientist Jennifer Heisz, PhD, of McMaster University, Hamilton, Ont.

First, she said, “exercise helps you fall asleep faster and sleep deeper at night.” (The only exception is if you exercise so intensely or so close to bedtime that your heart rate is still elevated.)

Second, “exercising at a consistent time every day helps regulate the body’s circadian rhythms.” It doesn’t matter if the exercise is in the morning, evening, or anywhere in between. As long as it’s predictable, it will help you fall asleep and wake up at the same times.

Outdoor exercise is even better, she said. The sun is the most powerful regulator of the circadian clock and works in tandem with physical activity.

Third, exercising at specific times can help you overcome jet lag or adjust to an earlier or later shift at work.

“Exercising at 7 a.m. or between 1 and 4 p.m. helps your circadian clock to ‘fall back’ in time, making it easier to wake up earlier,” Dr. Heisz said. If you need to train your body to wake up later in the morning, try working out between 7 p.m. and 10 p.m.
 

All exercise is good, but the right timing can make it even better

“The best time to exercise is when you can fit it in,” Dr. Arciero said. “You’ve got to choose the time that fits your lifestyle best.”

But context matters, he noted.

“For someone needing to achieve an improvement in their risk for cardiometabolic disease,” his study shows an advantage to working out later in the day, especially for men. If you’re more focused on building upper-body strength and power, you’ll probably get better results from training in the afternoon or evening.

And for fat loss, the Skidmore study shows better results for women who did morning workouts.

And if you’re still not sure? Try sleeping on it – preferably after your workout.

A version of this article first appeared on WebMD.com.


Article Source

FROM FRONTIERS IN PHYSIOLOGY


To predict mortality, you need a leg to stand on


Storks everywhere, rejoice. A new study shows that the ability to stand on one leg for at least 10 seconds is strongly linked to the risk of death over the next 7 years.

According to the findings, people in middle age and older who couldn’t perform the 10-second standing test were nearly four times as likely to die of any cause – heart attacks, strokes, cancer, and more – in the coming years than those who could, well, stand the test of time.

Claudio Gil Araújo, MD, PhD, research director of the Exercise Medicine Clinic-CLINIMEX in Rio de Janeiro, who led the study, called the results “awesome!”

“As a physician who has worked with cardiac patients for over 4 decades, I was very impressed in finding out that, for those between 51 and 75 years of age, it is riskier for survival to not complete the 10-second one-leg standing test than to have been diagnosed as having coronary artery disease or in being hypertensive” or having abnormal cholesterol, Dr. Araújo said in an interview.

The findings appeared in the British Journal of Sports Medicine.

Researchers have known for at least a half century that balance and mortality are connected. One reason is falls: Worldwide, nearly 700,000 people each year die as a result of a fall, according to the World Health Organization, and more than 37 million falls annually require medical attention. But as the new study indicates, falls aren’t the only problem.

Dr. Araújo and colleagues have been working on ways to improve balance and strength as people age. In addition to the one-legged standing test, they have previously shown that the ability to rise from a sitting position on the floor is also a strong predictor of longevity.

For the new study, the researchers assessed 1,702 people in Brazil (68% men) aged 51-75 years who had been participating in an ongoing exercise study that began there in 1994.
 

Three tries to succeed

Starting in 2008, the team introduced the standing test, which involves balancing on one leg while resting the front of the free foot against the back of the supporting lower leg. Participants get three tries to hold that posture for at least 10 seconds.

Not surprisingly, the ability to perform the test dropped with age. Although 20% of people in the study overall were unable to stand on one leg for 10 seconds, that figure rose to about 70% for those aged 76-80 years, and nearly 90% for those aged 81-85, according to the researchers. Of the two dozen 85-year-olds in the study, only two were able to complete the standing test, Dr. Araújo told this news organization.

At roughly age 70, half of people could not complete the 10-second test.

Over an average of 7 years of follow-up, 17.5% of people who could not manage the 10-second stand had died, compared with 4.5% of those who could last that long, the study found.

After accounting for age and many other risk factors, such as diabetes, body mass index, and a history of heart disease, people who were unable to complete the standing test were 84% more likely to die from any cause over the study period than their peers with better one-legged static balance (95% confidence interval, 1.23-2.78; P < .001).

The researchers said their study was limited by its lack of diversity – all the participants were relatively affluent Brazilians – and the inability to control for a history of falls and physical activity. But they said the size of the cohort, the long follow-up period, and their use of sophistical statistical methods helped mitigate the shortcomings.

Although low aerobic fitness is a marker of poor health, much less attention has been paid to nonaerobic fitness – things like balance, flexibility, and muscle strength and power, Dr. Araújo said.

“We are accumulating evidence that these three components of nonaerobic physical fitness are potentially relevant for good health and even more relevant for survival in older subjects,” Dr. Araújo said. Poor nonaerobic fitness, which is normally but not always associated with a sedentary lifestyle, “is the background of most cases of frailty, and being frail is strongly associated with a poor quality of life, less physical activity and exercise, and so on. It’s a bad circle.”

Dr. Araújo’s group has been using the standing test in their clinic for more than a dozen years and have seen gains in their patients. “Patients are often unaware that they are unable to sustain 10 seconds standing one legged. After this simple evaluation, they are much more prone to engage in balance training,” he said.

For now, the researchers don’t have data to show that improving static balance or performance on the standing test can affect survival, a “quite attractive” possibility, he added. But balance can be substantially improved through training.

“After only a few sessions, an improvement can be perceived, and this influences quality of life,” Dr. Araújo said. “And this is exactly what we do with the patients that we evaluated and for those that are attending our medically supervised exercise program.”

George A. Kuchel, MD, CM, FRCP, professor and Travelers Chair in Geriatrics and Gerontology at the University of Connecticut, Farmington, called the research “well done” and said the results “make perfect sense, since we have known for a long time that muscle strength is an important determinant of health, independence, and survival.”

Identifying frail patients quickly, simply, and reliably in the clinical setting is a pressing need, Dr. Kuchel, director of the UConn Center on Aging, said in an interview. The 10-second test “has considerable appeal” for this purpose.

“This could be, or rather should be, of great interest to all busy clinicians who see older adults in primary care or consultative capacities,” Dr. Kuchel added. “I hate to be nihilistic as regards what is possible in the context of really busy clinical practices, but even the minute or so that this takes to do may very well be too much for busy clinicians to do.”

Dr. Araújo and Dr. Kuchel reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Storks everywhere, rejoice. A new study shows that the ability to stand on one leg for at least 10 seconds is strongly linked to the risk of death over the next 7 years.

According to the findings, people in middle age and older who couldn’t perform the 10-second standing test were nearly four times as likely to die of any cause – heart attacks, strokes, cancer, and more – in the coming years than those who could, well, stand the test of time.

Claudio Gil Araújo, MD, PhD, research director of the Exercise Medicine Clinic-CLINIMEX in Rio de Janeiro, who led the study, called the results “awesome!”

“As a physician who has worked with cardiac patients for over 4 decades, I was very impressed in finding out that, for those between 51 and 75 years of age, it is riskier for survival to not complete the 10-second one-leg standing test than to have been diagnosed as having coronary artery disease or in being hypertensive” or having abnormal cholesterol, Dr. Araújo said in an interview.

The findings appeared in the British Journal of Sports Medicine.

Researchers have known for at least a half century that balance and mortality are connected. One reason is falls: Worldwide, nearly 700,000 people each year die as a result of a fall, according to the World Health Organization, and more than 37 million falls annually require medical attention. But as the new study indicates, falls aren’t the only problem.

Dr. Araújo and colleagues have been working on ways to improve balance and strength as people age. In addition to the one-legged standing test, they have previously shown that the ability to rise from a sitting position on the floor is also a strong predictor of longevity.

For the new study, the researchers assessed 1,702 people in Brazil (68% men) aged 51-75 years who had been participating in an ongoing exercise study that began there in 1994.
 

Three tries to succeed

Starting in 2008, the team introduced the standing test, which involves balancing on one leg with the front of the free foot resting against the back of the weight-bearing lower leg. People get three tries to maintain that posture for at least 10 seconds.
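
The pass/fail rule is simple enough to express in a few lines. The sketch below is purely illustrative; the function name and the example attempt durations are invented, not drawn from the study protocol:

```python
def passes_standing_test(attempt_durations_s, required_s=10.0, max_tries=3):
    """Hypothetical encoding of the rule described above: pass if any of
    up to three attempts holds the one-legged stance for 10 seconds."""
    return any(t >= required_s for t in attempt_durations_s[:max_tries])

print(passes_standing_test([4.2, 10.5]))      # True: second try reached 10 s
print(passes_standing_test([3.0, 6.1, 8.9]))  # False: no try reached 10 s
```

Attempts beyond the third are ignored, mirroring the three-try limit.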

Not surprisingly, the ability to perform the test dropped with age. Although 20% of people in the study overall were unable to stand on one leg for 10 seconds, that figure rose to about 70% for those aged 76-80 years, and nearly 90% for those aged 81-85, according to the researchers. Of the two dozen 85-year-olds in the study, only two were able to complete the standing test, Dr. Araújo told this news organization.

At roughly age 70, half of people could not complete the 10-second test.

Over an average of 7 years of follow-up, 17.5% of people who could not manage the 10-second stand had died, compared with 4.5% of those who could last that long, the study found.

After accounting for age and many other risk factors, such as diabetes, body mass index, and a history of heart disease, people who were unable to complete the standing test were 84% more likely to die from any cause over the study period than their peers with better one-legged static balance (95% confidence interval, 1.23-2.78; P < .001).
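
The relationship between the lede’s “nearly four times” and the adjusted 84% figure can be laid out with simple arithmetic, using only the numbers reported above (illustrative, not a reanalysis):

```python
# Unadjusted 7-year mortality by test result, as reported in the study:
died_failed = 0.175   # 17.5% of those who could not hold the stance
died_passed = 0.045   # 4.5% of those who could

# Crude risk ratio -- the source of the "nearly four times" framing:
crude_rr = died_failed / died_passed
print(f"crude risk ratio: {crude_rr:.1f}")            # ~3.9

# After adjustment for age, BMI, comorbidities, and other factors,
# the reported hazard ratio is 1.84 (95% CI, 1.23-2.78), i.e. an 84%
# higher adjusted risk of death from any cause:
adjusted_hr = 1.84
print(f"adjusted excess risk: {adjusted_hr - 1:.0%}")  # 84%
```

The gap between the crude ratio and the adjusted hazard ratio reflects how much of the raw difference is explained by age and the other risk factors in the model.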

The researchers said their study was limited by its lack of diversity – all the participants were relatively affluent Brazilians – and by the inability to control for a history of falls and physical activity. But they said the size of the cohort, the long follow-up period, and their use of sophisticated statistical methods helped mitigate these shortcomings.

Although low aerobic fitness is a marker of poor health, much less attention has been paid to nonaerobic fitness – things like balance, flexibility, and muscle strength and power, Dr. Araújo said.

“We are accumulating evidence that these three components of nonaerobic physical fitness are potentially relevant for good health and even more relevant for survival in older subjects,” Dr. Araújo said. Poor nonaerobic fitness, which is normally but not always associated with a sedentary lifestyle, “is the background of most cases of frailty, and being frail is strongly associated with a poor quality of life, less physical activity and exercise, and so on. It’s a bad circle.”

Dr. Araújo’s group has been using the standing test in its clinic for more than a dozen years and has seen gains in patients. “Patients are often unaware that they are unable to sustain 10 seconds standing one legged. After this simple evaluation, they are much more prone to engage in balance training,” he said.

For now, the researchers don’t have data to show that improving static balance or performance on the standing test can affect survival, a “quite attractive” possibility, he added. But balance can be substantially improved through training.

“After only a few sessions, an improvement can be perceived, and this influences quality of life,” Dr. Araújo said. “And this is exactly what we do with the patients that we evaluated and for those that are attending our medically supervised exercise program.”

George A. Kuchel, MD, CM, FRCP, professor and Travelers Chair in Geriatrics and Gerontology at the University of Connecticut, Farmington, called the research “well done” and said the results “make perfect sense, since we have known for a long time that muscle strength is an important determinant of health, independence, and survival.”

Identifying frail patients quickly, simply, and reliably in the clinical setting is a pressing need, Dr. Kuchel, director of the UConn Center on Aging, said in an interview. The 10-second test “has considerable appeal” for this purpose.

“This could be, or rather should be, of great interest to all busy clinicians who see older adults in primary care or consultative capacities,” Dr. Kuchel added. “I hate to be nihilistic as regards what is possible in the context of really busy clinical practices, but even the minute or so that this takes to do may very well be too much for busy clinicians to do.”

Dr. Araújo and Dr. Kuchel reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Precision medicine vs. antibiotic resistance

Diversity is an omnipresent element in clinical practice: in the genome, in the environment, in patients’ lifestyles and habits. Precision medicine addresses the variability of the individual to improve diagnosis and treatment. It is increasingly used in specialties such as oncology, neurology, and cardiology. A personalized approach has many objectives, including optimizing treatment, minimizing the risk of adverse effects, facilitating early diagnosis, and determining predisposition to disease. Genomic technologies, such as massive sequencing techniques, and tools such as CRISPR-Cas9 are key to the future of personalized medicine.

Jesús Oteo Iglesias, MD, PhD, a specialist in microbiology and director of Spain’s National Center for Microbiology, spoke at the Spanish Association of Infectious Diseases and Clinical Microbiology’s recent conference. He discussed various precision medicine projects aimed at reinforcing the fight against antibiotic resistance.

Infectious diseases are complex because the diversity of the pathogenic microorganism combines with the patient’s own diversity, which influences the interaction between the two, said Dr. Oteo. Thus, the antibiogram and targeted antibiotic treatments (which are chosen according to the species, sensitivity to antimicrobials, type of infection, and patient characteristics) have been established applications of precision medicine for decades. However, multiple tools could further strengthen personalized medicine against multiresistant pathogens.

Therapeutic drug monitoring, in which multiple pharmacokinetic and pharmacodynamic factors are considered, is a strategy with great potential to increase the effectiveness of antibiotics and minimize toxicity. Owing to its costs and the need for trained staff, this tool would be especially indicated for patients with more complex conditions, such as those with obesity, complex infections, or infections with multiresistant bacteria, as well as those in critical condition. Multiple computer programs are available to help determine antibiotic dosages by estimating drug exposure and providing recommendations. However, clinical trials are needed to weigh the pros and cons of extending therapeutic monitoring to antibiotic classes beyond those for which it is already routine (for example, aminoglycosides and glycopeptides).
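
As a rough sketch of the kind of estimate such dosing programs make, assume one-compartment, steady-state kinetics, where 24-hour exposure (AUC24) equals the daily dose divided by the patient’s drug clearance, and dosing targets an AUC24/MIC ratio (a target used, for example, in vancomycin monitoring). Every name and number below is an invented example, not dosing guidance:

```python
def auc_24h(daily_dose_mg, clearance_l_per_h):
    """Steady-state 24-hour exposure: AUC24 = daily dose / clearance."""
    return daily_dose_mg / clearance_l_per_h

def daily_dose_for_target(target_auc_mic, mic_mg_l, clearance_l_per_h):
    """Daily dose needed so that AUC24 / MIC reaches the target ratio."""
    return target_auc_mic * mic_mg_l * clearance_l_per_h

cl = 4.0    # patient's estimated clearance in L/h (example value)
mic = 1.0   # pathogen MIC in mg/L (example value)

print(auc_24h(2000, cl))                    # 500.0 mg*h/L at 2,000 mg/day
print(daily_dose_for_target(400, mic, cl))  # 1600.0 mg/day for AUC24/MIC = 400
```

Real programs layer population pharmacokinetics, measured drug levels, and Bayesian updating on top of this kind of core calculation.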

One technology that could help in antibiotic use optimization programs is microneedle-based biosensors, which could be implanted in the skin for real-time antibiotic monitoring. This tool “could be the first step in establishing automated antibiotic administration systems, with infusion pumps and feedback systems, like those already used in diabetes for insulin administration,” said Dr. Oteo.

Artificial intelligence could also be a valuable technology for optimization programs. “We should go a step further in the implementation of artificial intelligence through clinical decision support systems,” said Dr. Oteo. This technology would guide the administration of antimicrobials using data extracted from the electronic medical record. However, there are great challenges to overcome in creating these tools, such as the risk of entering erroneous data; the difficulty in entering complex data, such as data relevant to antibiotic resistance; and the variability at the geographic and institutional levels.

Genomics is also a tool with great potential for identifying bacteria’s degree of resistance to antibiotics by studying mutations in chromosomal and acquired genes. A proof-of-concept study evaluated the sensitivity of different Pseudomonas aeruginosa strains to several antibiotics by analyzing genome sequences associated with resistance, said Dr. Oteo. The researchers found that this system was effective at predicting the sensitivity of bacteria from genomic data.

In the United States, the PATRIC bioinformatics center, which is financed by the National Institute of Allergy and Infectious Diseases, uses machine learning models to predict the antimicrobial resistance of different species of bacteria, including Staphylococcus aureus, Streptococcus pneumoniae, and Mycobacterium tuberculosis. These models, which are trained on genomic data paired with antibiotic resistance phenotypes, are able to identify resistance without prior knowledge of the underlying mechanisms.
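
The general genotype-to-phenotype idea can be sketched with synthetic data and the standard library only; this is not PATRIC’s actual pipeline, and the “genes” and numbers below are invented. Each isolate is represented by the presence or absence of candidate resistance genes, and a deliberately simple learner picks the gene that best predicts the measured phenotype:

```python
import random

random.seed(0)
N_GENES = 8

# Synthetic training set: gene 2 truly drives resistance, with 10% label noise.
isolates = [[random.randint(0, 1) for _ in range(N_GENES)] for _ in range(300)]
phenotype = [g[2] if random.random() > 0.1 else 1 - g[2] for g in isolates]

def train_stump(X, y):
    """Pick the single gene whose presence best matches the phenotype."""
    def accuracy(gene):
        return sum(x[gene] == label for x, label in zip(X, y)) / len(y)
    return max(range(N_GENES), key=accuracy)

best_gene = train_stump(isolates, phenotype)
print("most predictive gene index:", best_gene)  # gene 2 should win here
```

Production systems replace the stump with models trained on thousands of sequenced isolates, but the input/output shape – genomic features in, resistance phenotype out – is the same.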

Another factor to consider with regard to the use of precision medicine for infectious diseases is the microbiota. Dr. Oteo explained that the pathogenic microorganism interacts not only with the host but also with its microbiota, “which can be diverse, is manifold, and can be very different, depending on the circumstances. These interactions can be translated into ecological and evolutionary pressures that may have clinical significance.” One of the best-known examples is the possibility that a beta-lactamase–producing bacterium benefits other bacteria around it by secreting these enzymes. Furthermore, some known forms of bacterial interaction (such as plasmid transfer) are directly related to antibiotic resistance. Metagenomics, which involves the genetic study of communities of microbes, could provide more information for predicting and avoiding infections by multiresistant pathogens by monitoring the microbiome.

The CRISPR-Cas9 gene editing tool could also be an ally in the fight against antibiotic resistance by eliminating resistance genes and thus making bacteria sensitive to certain antibiotics. Several published preliminary studies indicate that this is possible in vitro. The main challenge for the clinical application of CRISPR is in introducing it into the target microbial population. Use of conjugative plasmids and bacteriophages could perhaps be an option for overcoming this obstacle in the future.

Exploiting the possibilities of precision medicine with the most innovative tools against antibiotic resistance is a great challenge, said Dr. Oteo, but the situation demands it, and even small steps toward that goal are worth taking.

A version of this article appeared on Medscape.com. This article was translated from Univadis Spain.


A version of this article appeared on Medscape.com. This article was translated from Univadis Spain.


Remnant cholesterol improves CV risk prediction


Adding remnant cholesterol to guideline prediction models should improve the identification of individuals who would benefit the most from statin treatment for the primary prevention of heart disease, a new study suggests.

The study, which followed almost 42,000 Danish individuals without a history of ischemic cardiovascular disease, diabetes, or statin use for more than 10 years, found that elevated remnant cholesterol appropriately reclassified up to 40% of those who later experienced myocardial infarction and ischemic heart disease.

“The clinical implications of our study include that doctors and patients should be aware of remnant cholesterol levels to prevent future risk of MI and ischemic heart disease,” the authors conclude.

They suggest that the development of a cardiovascular risk algorithm, including remnant cholesterol together with LDL cholesterol, would help to better identify high-risk individuals who could be candidates for statins in a primary prevention setting.

They note that physicians are currently encouraged to evaluate non-HDL cholesterol and/or apolipoprotein B rather than LDL cholesterol, but not yet remnant cholesterol, possibly because of the limited availability of remnant cholesterol values in some parts of the world.

However, they point out that remnant cholesterol can be calculated with a standard lipid profile without additional cost, which is currently already the standard procedure in the greater Copenhagen area.

“This means that the use of remnant cholesterol is easy to introduce into daily clinical practice,” they say.

The study was published online in the Journal of the American College of Cardiology.

The authors, Takahito Doi, MD, Anne Langsted, MD, and Børge Nordestgaard, from Copenhagen University Hospital, Denmark, explain that remnant cholesterol is total cholesterol minus LDL-cholesterol minus HDL-cholesterol and includes the cholesterol content of the triglyceride-rich very-low-density lipoproteins, intermediate-density lipoproteins, and chylomicron remnants in the nonfasting state.
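The authors' definition reduces to a one-line calculation on a standard lipid panel. A minimal Python sketch (the function name and the panel values are illustrative, not taken from the study):

```python
def remnant_cholesterol(total_chol: float, ldl_chol: float, hdl_chol: float) -> float:
    """Remnant cholesterol per the authors' definition:
    total cholesterol minus LDL cholesterol minus HDL cholesterol.
    All inputs must be in the same unit (e.g., mg/dL or mmol/L)."""
    return total_chol - ldl_chol - hdl_chol

# Hypothetical nonfasting lipid panel, mg/dL:
print(remnant_cholesterol(200, 120, 50))  # prints 30
```

Because the calculation uses only values already reported on a routine lipid profile, it adds no laboratory cost, which is the authors' point about ease of adoption.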

“When these particles enter the arterial wall, they are taken up by macrophages to produce foam cells, and therefore elevated remnant cholesterol likely enhance accumulation of cholesterol in the arterial wall, leading to progression of atherosclerosis and in consequence ischemic heart disease,” they note.  

They point out that most guidelines for assessment of the 10-year risk of ischemic heart and atherosclerotic cardiovascular disease include levels of total and HDL cholesterol, but remnant cholesterol levels are not included.

They conducted the current study to investigate whether elevated remnant cholesterol would lead to appropriate reclassification of individuals who later experienced MI or ischemic heart disease.

The researchers analyzed data from the Copenhagen General Population Study, which recruited individuals from the White Danish general population from 2003-2015 and followed them until 2018. Information on lifestyle, health, and medication, including statin therapy, was obtained through a questionnaire, and participants underwent physical examinations and had nonfasting blood samples drawn for biochemical measurements.

For the current study, they included 41,928 individuals aged 40-100 years enrolled before 2009 without a history of ischemic cardiovascular disease, diabetes, and statin use at baseline. The median follow-up time was 12 years. Information on diagnoses of MI and ischemic heart disease was collected from the national Danish Causes of Death Registry and all hospital admissions and diagnoses entered in the national Danish Patient Registry.

During the first 10 years of follow-up, there were 1,063 MIs and 1,460 ischemic heart disease events (death from ischemic heart disease, nonfatal MI, and coronary revascularization).

Results showed that, in models based on conventional risk factors estimating risk of heart disease of above or below 5% in 10 years, adding remnant cholesterol at levels above the 95th percentile appropriately reclassified 23% of individuals who had an MI and 21% of individuals who had an ischemic heart disease event.

Using remnant cholesterol levels above the 75th percentile appropriately reclassified 10% of those who had an MI and 8% of those who had an ischemic heart disease event. No events were reclassified incorrectly.

Using measurements of remnant cholesterol also improved reclassification of individuals with heart disease risk above or below 7.5% or 10% in 10 years.

When reclassifications were combined from below to above 5%, 7.5%, and 10% risk of events, 42% of individuals with MI and 41% with ischemic heart disease events were reclassified appropriately.

In an editorial accompanying publication of the study in JACC, Peter Wilson, MD, Emory University School of Medicine, Atlanta, and Alan Remaley, MD, National Heart, Lung, and Blood Institute, say these findings rekindle interest in atherogenic nonfasting lipid measurements and emphasize an important role for elevated nonfasting remnant cholesterol as a value-added predictor of ischemic events.

The editorialists note that both fasting and nonfasting lipid values provide useful information for atherosclerotic cardiovascular disease (ASCVD) risk estimation, and elevated nonfasting remnant cholesterol appears to help identify persons at greater risk for an initial cardiovascular ischemic event.   

They add that very elevated levels (above the 75th percentile) of nonfasting remnant cholesterol deserve further evaluation as a potentially valuable “modifier of ASCVD risk,” and replication of the results could move these findings forward to potentially improve prognostication and care for patients at risk for ischemic heart disease events.
 
 

 

An indirect measure of triglycerides

Dr. Wilson explained that remnant cholesterol is an indirect measure of triglycerides beyond LDL levels, and using it thus adds a new lipid measurement to risk prediction.

“We are completely focused on LDL cholesterol,” he said. “This opens it up a bit by adding in another measure that takes into account triglycerides as well as LDL.”

He also pointed out that use of a nonfasting sample is another advantage of measuring remnant cholesterol.  

“An accurate measure of LDL needs a fasting sample, which is a nuisance, whereas remnant cholesterol can be measured in a nonfasting blood sample, so it is more convenient,” Dr. Wilson said.

While this study shows this measure is helpful for risk prediction in the primary prevention population, Dr. Wilson believes remnant cholesterol could be most useful in helping to guide further medication choice in patients who are already taking statins.

“Statins mainly target LDL, but if we can also measure nonfasting triglycerides this will be helpful. It may help us select some patients who may need a different type of drug to use in addition to statins that lowers triglycerides,” he said.  

This work was supported by the Global Excellence Programme, the Research Fund for the Capital Region of Denmark, the Japanese College of Cardiology Overseas Research Fellowship, and the Scandinavia Japan Sasakawa Foundation. Dr. Nordestgaard has reported consultancies or talks sponsored by AstraZeneca, Sanofi, Regeneron, Akcea, Amgen, Amarin, Kowa, Denka, Novartis, Novo Nordisk, Esperion, and Silence Therapeutics. Dr. Doi has reported talks sponsored by MSD.

A version of this article first appeared on Medscape.com.


Survival for elderly breast cancer patients 25% after 4 years


A study of elderly patients with HER2-positive/HR-negative metastatic breast cancer finds that, in actual clinical practice, median overall survival is significantly shorter than in younger counterparts.

After 46 months of treatment, the survival rate was only 25%, according to a study presented in June at the annual meeting of the American Society of Clinical Oncology. The finding suggests that older age is an important prognostic factor for breast cancer survival, said study author Zhonghui Jenny Ou, a doctoral candidate at the Massachusetts College of Pharmacy and Health Sciences in Boston.

For comparison, Ms. Ou cited the CLEOPATRA trial, which showed a median overall survival of 57.1 months for patients treated with pertuzumab, docetaxel, and trastuzumab versus 40.8 months for placebo with docetaxel plus trastuzumab.

The Ou study is based on an analysis of data between 2012 and 2016 from the SEER-Medicare database. The final analysis included 73 women (average age, 75 years at diagnosis) with early-stage HER2-positive/HR-negative metastatic breast cancer. Fifty-six women were treated with trastuzumab plus pertuzumab and chemotherapy as first-line treatment, and 17 were treated with chemotherapy only. The longest length of treatment with trastuzumab was over 44 months, and the median follow-up for overall survival was 13 months (95% confidence interval, 12.7-18.7).

Between 2012 and 2016, five patients died from other causes, including lung cancer, cerebrovascular diseases, aortic aneurysm and dissection, pneumonia and influenza, and heart disease.

“While there are many clinical trials about HER2-positive metastatic breast cancer, these trials were all performed in younger and relatively healthier patients. Few studies included elderly patients 65 years or older,” Ms. Ou said.

According to the American Cancer Society, 31% of all newly diagnosed breast cancer cases are in women who are 70 years old or older, yet 47% of all breast cancer deaths each year are in women in this age group.

Undertreatment and lower treatment intensity have been cited by other studies as possible contributing factors to lower overall survival rates, but breast cancer in elderly women is a complex and understudied subject. The disproportionately higher mortality rates in elderly women, compared with younger women, are attributable to a number of reasons, write the authors of one of the most recent studies on the subject.

“It is well established that receipt of adjuvant chemotherapy, trastuzumab, and hormonal therapy reduces risk of recurrence and death across all age groups, yet multiple studies document suboptimal systemic treatment and adherence for older patients, including omission of efficacious treatments, receipt of lower intensity and/or nonguideline treatment, or poor adherence to hormonal therapy,” Freedman et al. wrote in the May 15, 2018, issue of the journal Cancer.

While the Ou study sample size was small, the study’s real-world analysis is telling, Ms. Ou said.

“The major limitation of this study is that it has – after applying all the eligibility criteria to the 170,516 breast cancer patients from the SEER-Medicare database between 2012 and 2016 – a study population of just 73 patients. The number is sufficient to do survival analysis,” she said.


A study of elderly patients with HER2-positive/HR-negative metastatic breast cancer finds a significantly shorter median overall survival in actual clinical practice than younger counterparts.

After 46 months of treatment, the survival rate was only 25%, according to a study presented in June at the annual meeting of the American Society of Clinical Oncology. The finding suggests that older age is an important prognostic factor for breast cancer survival, said study author Zhonghui Jenny Ou, a doctoral candidate at the Massachusetts College of Pharmacy and Health Sciences in Boston.

For comparison, Ms. Ou cited the CLEOPATRA trial which showed a median overall survival of 57.1 months for patients who were treated with pertuzumab, docetaxel and trastuzumab versus 40.8 months for placebo with docetaxel plus trastuzumab.

A study of elderly patients with HER2-positive/HR-negative metastatic breast cancer found a significantly shorter median overall survival in actual clinical practice than that seen in younger counterparts.

After 46 months of treatment, the survival rate was only 25%, according to a study presented in June at the annual meeting of the American Society of Clinical Oncology. The finding suggests that older age is an important prognostic factor for breast cancer survival, said study author Zhonghui Jenny Ou, a doctoral candidate at the Massachusetts College of Pharmacy and Health Sciences in Boston.

For comparison, Ms. Ou cited the CLEOPATRA trial, which showed a median overall survival of 57.1 months for patients treated with pertuzumab, docetaxel, and trastuzumab versus 40.8 months for placebo with docetaxel plus trastuzumab.

The Ou study is based on an analysis of data between 2012 and 2016 from the SEER-Medicare database. The final analysis included 73 women (average age, 75 years at diagnosis) with HER2-positive/HR-negative metastatic breast cancer. Fifty-six women received trastuzumab plus pertuzumab and chemotherapy as first-line treatment, and 17 received chemotherapy only. The longest duration of trastuzumab treatment exceeded 44 months, and the median follow-up for overall survival was 13 months (95% confidence interval, 12.7-18.7 months).

Between 2012 and 2016, five patients died from other causes, including lung cancer, cerebrovascular diseases, aortic aneurysm and dissection, pneumonia and influenza, and heart disease.

“While there are many clinical trials about HER2-positive metastatic breast cancer, these trials were all performed in younger and relatively healthier patients. Few studies included elderly patients 65 years or older,” Ms. Ou said.

According to the American Cancer Society, 31% of all newly diagnosed breast cancer cases are in women who are 70 years old or older, yet 47% of all breast cancer deaths each year are in women in this age group.

Undertreatment and lower treatment intensity have been cited by other studies as possible contributors to lower overall survival, but breast cancer in elderly women is a complex and understudied subject. The disproportionately higher mortality rates in elderly women compared with younger women are attributable to a number of factors, write the authors of one of the most recent studies on the subject.

“It is well established that receipt of adjuvant chemotherapy, trastuzumab, and hormonal therapy reduces risk of recurrence and death across all age groups, yet multiple studies document suboptimal systemic treatment and adherence for older patients, including omission of efficacious treatments, receipt of lower intensity and/or nonguideline treatment, or poor adherence to hormonal therapy,” Freedman et al. wrote in the May 15, 2018, issue of the journal Cancer.

While the Ou study's sample size was small, its real-world analysis is telling, Ms. Ou said.

“The major limitation of this study is that it has – after applying all the eligibility criteria to the 170,516 breast cancer patients from the SEER-Medicare database between 2012 and 2016 – a study population of just 73 patients. The number is sufficient to do survival analysis,” she said.


FROM ASCO 2022


Neighborhood analysis links breast cancer outcomes to socioeconomic status

Article Type
Changed

A neighborhood analysis of socioeconomic status conducted in the Pittsburgh area found worse metastatic breast cancer survival among patients of low socioeconomic status. The findings suggest that race itself is not an independent factor in outcomes.

“This study demonstrates that metastatic breast cancer patients of low socioeconomic status have worse outcomes than those with higher socioeconomic status at our center. It also underscores the idea that race is not so much a biological construct but more a consequence of socioeconomic issues. The effect of race is likely mediated by lower socioeconomic status,” said Susrutha Puthanmadhom Narayanan, MD, who presented the results of her study earlier this month in Chicago at the annual meeting of the American Society of Clinical Oncology.

“The current study should make clinicians cognizant of the potential for biases in the management of metastatic breast cancer in terms of socioeconomic status and race. One should think of socioeconomic status as a predictor of bad outcomes, almost like a comorbidity, and think of [associations between race and outcomes], as a consequence of socioeconomic inequality,” said Dr. Puthanmadhom Narayanan, who is an internal medicine resident at University of Pittsburgh Medical Center.

She and her colleagues intend to dig deeper into the relationships. “We are interested in looking at utilization of different treatment options for metastatic breast cancer between the socioeconomic status groups. In the preliminary analysis, we saw that ER-positive metastatic breast cancer patients with lower socioeconomic status get treated with tamoxifen more often than aromatase inhibitors and newer agents. And, we have plans to study stress signaling and inflammation as mediators of bad outcomes in the low socioeconomic status population,” Dr. Puthanmadhom Narayanan said.

In fact, that tendency for lower socioeconomic status patients to receive older treatments should be a call to action for physicians. “This study should make clinicians cognizant of the potential for biases in management of metastatic breast cancer in terms of socioeconomic status and race,” she said.

The study is based on an analysis of data from the Neighborhood Atlas, in which a Neighborhood Deprivation Index (NDI) score was calculated. An NDI score in the bottom tertile meant that patients were better off than patients with mid- to high-range NDI scores. In this study, socioeconomic status was described as “low deprivation” or “high deprivation.” Higher deprivation correlated with lower overall survival, and there were more Black patients in the high deprivation group (10.5%) than in the low deprivation group (3.7%). In a multivariate Cox proportional hazards model, socioeconomic status, but not race, had a significant effect on overall survival (hazard ratio for high deprivation, 1.19; 95% confidence interval, 1.04-1.37; P = .01).

The study included 1,246 patients treated at the University of Pittsburgh Medical Center between 2000 and 2017. Of these, 414 patients in the bottom tertile of NDI were considered as having low deprivation, while 832 patients in the middle or top tertiles were classified as having high deprivation.

The two socioeconomic status groups were similar in baseline characteristics, with the exception of race: 10.5% of the high deprivation group were African American, compared with 3.7% of the low deprivation group (P = .000093).

Univariate analyses showed worse survival in both Black women and women in the lower socioeconomic status group, but a multivariate analysis found only socioeconomic status was associated with overall survival (hazard ratio for lower socioeconomic status, 1.19; P = .01).

The study had several strengths, according to Rachel Freedman, MD, MPH, who served as a discussant for the abstract. “It included both de novo and recurrent metastatic breast cancer, unlike previous studies based on the Surveillance, Epidemiology, and End Results (SEER) database that only included de novo cases. It also employed a novel tool to define socioeconomic status in the form of the Neighborhood Atlas.” The study “adds more evidence that socioeconomic status likely mediates much of what we see when it comes to racial disparities,” said Dr. Freedman, who is a senior physician at the Dana-Farber Cancer Institute.

Nevertheless, more work needs to be done. Dr. Freedman pointed out that the current study did not include information on treatment.

The findings underscore the failure to date to address disparities in breast cancer treatment, an effort that is hampered by difficulty in teasing out complex factors that may impact survival. “We need to standardize the way that we collect social determinants of health and act upon findings, and we need to standardize patient navigation, and we need to commit as a community to diverse clinical trial populations,” Dr. Freedman said.

Dr. Puthanmadhom Narayanan reported no relevant financial disclosures. Dr. Freedman is an employee and stockholder of Firefly Health.

FROM ASCO 2022
