The Journal of Clinical Outcomes Management® is an independent, peer-reviewed journal offering evidence-based, practical information for improving the quality, safety, and value of health care.

Sex is still a taboo subject for patients with breast cancer


An Italian study of women diagnosed with breast cancer reported that around 50% experienced body image disturbance and 20% noted a negative impact on their sex life. And while meeting with a specialist in psycho-oncology was universally viewed as an acceptable option, only one out of four patients considered consulting a sexologist. All these women should be encouraged to face and address issues related to sexuality so that they can truly regain a good quality of life, the study suggests.

The study, which was conducted at the breast unit of Santa Maria Goretti Hospital in Latina, Italy, enrolled 141 patients who had undergone breast cancer surgery. Participants completed a questionnaire covering self-image, sexual activity, and sexual satisfaction, assessing these aspects both before and after treatment. They were then asked whether they felt they needed to see a sexologist or a specialist in psycho-oncology.

The findings clearly showed a worsening of body image perception. When the women were asked about the relationship they had with their body, femininity, and beauty prior to being diagnosed, 37.4% characterized it as very good and 58.9% as “normal,” with ups and downs but nothing that they would term “conflictual.” After diagnosis, 48.9% noted that the disease had affected their body image, partially undermining their sense of femininity and beauty. A further 7.2% had difficulty recognizing their own body, and their relationship with femininity also became strained.

On the topic of sexuality, 71.2% of patients were completely satisfied with their sex life before they were diagnosed with breast cancer, 23.7% were partially satisfied, and 5.0% were unsatisfied. As for their sex life after diagnosis and surgery, 20.1% stated that it continued to be fulfilling and 55.4% said that it had gotten worse; 18.8% reported significant sexual dissatisfaction.

The participants were asked whether consulting a professional would be warranted, and whether that would provide useful support for overcoming the difficulties and challenges arising from the disease and the related treatments. In response, 97.1% said they would go to a specialist in psycho-oncology, but only 27.3% would seek help from a sexologist.

“Despite the negative impact on body image and on sexuality, few patients would seek the help of a sexologist; nearly all of the patients, however, would seek the help of a specialist in psycho-oncology. This was very surprising to us,” write the authors. They went on to note that they are carrying out another project to understand the reason for this disparity.

In addition, they advised clinicians to encourage communication about sexuality – a topic that is regularly overlooked and not included in discussions with patients, mostly because of cultural barriers. Often, physicians aren’t comfortable talking about sexuality, as they don’t feel they have the proper training to do so. Patients who are experiencing issues related to sexuality also often have difficulty asking for help. And so, in their conclusion, the authors point out that “collaborating together in the right direction is the basis of change and good communication.”

This article was translated from Univadis Italy and appeared on Medscape.com.


Risk factors linked to post–COVID vaccination death identified


Patients with risk factors associated with COVID-19–related death after coronavirus vaccination should be considered a priority for COVID-19 therapeutics and further booster doses, say U.K. researchers.

The researchers have identified factors that put a person at greater risk of COVID-related death after they have completed both doses of the primary COVID vaccination schedule and a booster dose.

For their research, published in JAMA Network Open, researchers from the Office for National Statistics (ONS); Public Health Scotland; the University of Strathclyde, Glasgow; and the University of Edinburgh used data from the ONS public linked data set, which is based on the 2011 Census of England and covers 80% of the population of England. The study population included 19,473,570 individuals aged 18-100 years (mean age, 60.8 years; 45.2% men; 92.0% White individuals) living in England who had completed both doses of their primary vaccination schedule and had received their mRNA booster 14 days or more before Dec. 31, 2021. The outcome of interest was time to death involving COVID-19 occurring between Jan. 1 and March 16, 2022.
 

Prioritization of booster doses and COVID-19 treatments

The authors highlighted how it had become “critical” to identify risk factors associated with COVID-19 death in those who had been vaccinated and pointed out that existing evidence was “based on people who have received one or two doses of a COVID-19 vaccine and were infected by the Alpha or Delta variant.” They emphasized that establishing which groups are at increased risk of COVID-19 death after receiving a booster is crucial for the “prioritization of further booster doses and access to COVID-19 therapeutics.”

During the study period the authors found that there were 4,781 (0.02%) deaths involving COVID-19 and 58,020 (0.3%) deaths from other causes. Of those who died of COVID-19, the mean age was 83.3 years, and the authors highlighted how “age was the most important characteristic” associated with the risk of postbooster COVID-19 death. They added that, compared with a 50-year-old, the hazard ratio (HR) for an 80-year-old individual was 31.3 (95% confidence interval, 26.1-37.6).

They found that women were at lower risk than men, with an HR of 0.52 (95% CI, 0.49-0.55). An increased risk of COVID-19 death was also associated with living in a care home or in a socioeconomically deprived area.

Of note, they said that “there was no association between the risk of COVID-19 death and ethnicity, except for those of Indian background,” who they explained were at slightly elevated risk, compared with White individuals. However, they explained how the association with ethnicity was “unclear and differed from previous studies,” with their findings likely to be due “largely to the pronounced differences in vaccination uptake” between ethnic groups in previous studies.
 

Dementia concern

With regard to existing health conditions, the authors commented that “most of the QCovid risk groups were associated with an increased HR of postbooster breakthrough death, except for congenital heart disease, asthma, and prior fracture.”

Risk was particularly elevated, they said, for people with severe combined immunodeficiency (HR, 6.2; 95% CI, 3.3-11.5), and they also identified several conditions associated with HRs of greater than 3, including dementia.

In July, Alzheimer’s Research UK urged the U.K. government to boost the development and deployment of new dementia treatments after finding that a significant proportion of people who died of COVID-19 in 2020 and 2021 were living with the condition. At the time, data published by the ONS on deaths caused by coronavirus in England and Wales in 2021 showed dementia to be the second-most common pre-existing condition.

David Thomas, head of policy at Alzheimer’s Research UK, said: “We’ve known for some time that people with dementia have been hit disproportionately hard during the pandemic, but this new data serves as a stark reminder of the growing challenge we face in tackling the condition, and the urgent need to address it.”

The authors of the new research acknowledged the study’s limitations, notably that only data for the population living in England who were enumerated in the 2011 Census of England and Wales was included.

However, subpopulations “remain at increased risk of COVID-19 fatality” after receiving a booster vaccine during the Omicron wave, they pointed out.

“The subpopulations with the highest risk should be considered a priority for COVID-19 therapeutics and further booster doses,” they urged.

A version of this article first appeared on Medscape UK.

Article Source

FROM JAMA NETWORK OPEN


‘Spectacular’ polypill results also puzzle docs


New research shows that “polypills” can prevent a combination of cardiovascular events and cardiovascular deaths among patients who have recently experienced a myocardial infarction.

But results from the SECURE trial, published in the New England Journal of Medicine, also raise questions.

How do the polypills reduce cardiovascular problems? And will they ever be available in the United States?

Questions about how they work center on a mystery in the trial data: the polypill – containing aspirin, an angiotensin-converting enzyme (ACE) inhibitor, and a statin – apparently conferred substantial cardiovascular protection while producing average blood pressure and lipid levels that were virtually the same as with usual care.

As to when polypills will be available, the answer may hinge on whether companies, government agencies, or philanthropic foundations come to see making and paying for such treatments – combinations of typically inexpensive generic drugs in a single pill for the sake of convenience and greater adherence – as financially worthwhile.
 

A matter of adherence?

In the SECURE trial, presented in late August at the annual congress of the European Society of Cardiology in Barcelona, investigators randomly assigned 2,499 patients who had had an MI in the previous 6 months to receive usual care or a polypill.

Patients in the usual-care group typically received the same types of treatments included in the polypill, only taken separately. Different versions of the polypill were available to allow titration to tolerated doses of the component medications: aspirin (100 mg), ramipril (2.5, 5, or 10 mg), and atorvastatin (20 mg or 40 mg).

Researchers used the Morisky Medication Adherence Scale to gauge participants’ adherence to their medication regimen and found that the polypill group was more adherent. Patients who received the polypill were more likely to have a high level of adherence at 6 months (70.6% vs. 62.7%) and 24 months (74.1% vs. 63.2%), they reported. (The Morisky tool is the subject of some controversy because of the aggressive licensing tactics of its creator.)

The primary endpoint of cardiovascular death, MI, stroke, or urgent revascularization was significantly less likely in the polypill group during a median of 3 years of follow-up (hazard ratio, 0.76; P = .02).

“A primary-outcome event occurred in 118 of 1,237 patients (9.5%) in the polypill group and in 156 of 1,229 (12.7%) in the usual-care group,” the researchers report.

“Probably, adherence is the most important reason of how this works,” Valentin Fuster, MD, physician-in-chief at Mount Sinai Hospital, New York, who led the study, said at ESC 2022.

Still, some clinicians were left scratching their heads by the lack of difference between treatment groups in average blood pressure and levels of low-density lipoprotein (LDL) cholesterol.

In the group that received the polypill, average systolic and diastolic blood pressure at 24 months were 135.2 mmHg and 74.8 mmHg, respectively. In the group that received usual care, those values were 135.5 mmHg and 74.9 mmHg, respectively.

Likewise, “no substantial differences were found in LDL-cholesterol levels over time between the groups, with a mean value at 24 months of 67.7 mg/dL in the polypill group and 67.2 mg/dL in the usual-care group,” according to the researchers.

One explanation for the findings is that greater adherence led to beneficial effects that were not reflected in lipid and blood pressure measurements, the investigators said. Alternatively, the open-label trial design could have led to different health behaviors between groups, they suggested.

Martha Gulati, MD, director of preventive cardiology at Cedars-Sinai Medical Center, Los Angeles, said she loves the idea of polypills. But she wonders about the lack of difference in blood pressure and lipids in SECURE.

Dr. Gulati said she sees in practice how medication adherence and measurements of blood pressure and lipids typically go hand in hand.

When a patient initially responds to a medication, but then their LDL cholesterol goes up later, “my first question is, ‘Are you still taking your medication or how frequently are you taking it?’” Dr. Gulati said in an interview. “And I get all kinds of answers.”

“If you are more adherent, why wouldn’t your LDL actually be lower, and why wouldn’t your blood pressure be lower?” she asked.

Can the results be replicated?

Ethan J. Weiss, MD, a cardiologist and volunteer associate clinical professor of medicine at the University of California, San Francisco, said the SECURE results are “spectacular,” but the seeming disconnect with the biomarker measurements “doesn’t make for a clean story.”

“It just seems like if you are making an argument that this is a way to improve compliance ... you would see some evidence of improved compliance objectively” in the biomarker readings, Dr. Weiss said.

Trying to understand how the polypill worked requires more imagination. “Or it makes you just say, ‘Who cares what the mechanism is?’ These people did a lot better, full stop, and that’s all that matters,” he said.

Dr. Weiss said he expects some degree of replication of the results may be needed before practice changes.

To Steven E. Nissen, MD, chief academic officer of the Heart and Vascular Institute at Cleveland Clinic, the results “don’t make any sense.”

“If they got the same results on the biomarkers that the pill was designed to intervene upon, why are the [primary outcome] results different? It’s completely unexplained,” Dr. Nissen said.

In general, Dr. Nissen has not been an advocate of the polypill approach in higher-income countries.

“Medicine is all about customization of therapy,” he said. “Not everybody needs blood pressure lowering. Not everybody needs the same intensity of LDL reduction. We spend much of our lives seeing patients and treating their blood pressure, and if it doesn’t come down adequately, giving them a higher dose or adding another agent.”

Polypills might be reasonable for primary prevention in countries where people have less access to health care resources, he added. In such settings, a low-cost, simple treatment strategy might have benefit.

But Dr. Nissen still doesn’t see a role for a polypill in secondary prevention.

“I think we have to take a step back, take a deep breath, and look very carefully at the science and try to understand whether this, in fact, is sensible,” he said. “We may need another study to see if this can be replicated.”

For Dhruv S. Kazi, MD, the results of the SECURE trial offer an opportunity to rekindle conversations about the use of polypills for cardiovascular protection. These conversations and studies have been taking place for nearly two decades.

Dr. Kazi, associate director of the Richard A. and Susan F. Smith Center for Outcomes Research in Cardiology at Beth Israel Deaconess Medical Center, Boston, has used models to study the expected cost-effectiveness of polypills in various countries.

Although polypills can improve patients’ adherence to their prescribed medications, Dr. Kazi and colleagues have found that treatment gaps are “often at the physician level,” with many patients not prescribed all of the medications from which they could benefit.

Availability of polypills could help address those gaps. At the same time, many patients, even those with higher incomes, may have a strong preference for taking a single pill.

Dr. Kazi’s research also shows that a polypill approach may be more economically attractive as countries develop because successful treatment averts cardiovascular events that are costlier to treat.

“In the United States, in order for this to work, we would need a polypill that is both available widely but also affordable,” Dr. Kazi said. “It is going to require a visionary mover” to make that happen.

That could include philanthropic foundations. But it could also be a business opportunity for a company like Barcelona-based Ferrer, which provided the polypills for the SECURE trial.

The clinical and economic evidence in support of polypills has been compelling, Dr. Kazi said: “We have to get on with the business of implementing something that is effective and has the potential to greatly improve population health at scale.” 

The SECURE trial was funded by the European Union Horizon 2020 program and coordinated by the Spanish National Center for Cardiovascular Research (CNIC). Ferrer International provided the polypill that was used in the trial. CNIC receives royalties for sales of the polypill from Ferrer. Dr. Weiss is starting a biotech company unrelated to this area of research.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

New research shows that “polypills” can prevent a combination of cardiovascular events and cardiovascular deaths among patients who have recently experienced a myocardial infarction.

But results from the SECURE trial, published in the New England Journal of Medicine, also raise questions.

How do the polypills reduce cardiovascular problems? And will they ever be available in the United States?

Questions about how they work center on a mystery in the trial data: the polypill – containing aspirin, an angiotensin-converting enzyme (ACE) inhibitor, and a statin – apparently conferred substantial cardiovascular protection while producing average blood pressure and lipid levels that were virtually the same as with usual care.

As to when polypills will be available, the answer may hinge on whether companies, government agencies, or philanthropic foundations come to see making and paying for such treatments – combinations of typically inexpensive generic drugs in a single pill for the sake of convenience and greater adherence – as financially worthwhile.
 

A matter of adherence?

In the SECURE trial, presented in late August at the annual congress of the European Society of Cardiology in Barcelona, investigators randomly assigned 2,499 patients who had experienced an MI within the previous 6 months to receive usual care or a polypill.

Patients in the usual-care group typically received the same types of treatments included in the polypill, but taken as separate pills. Different versions of the polypill were available to allow titration to tolerated doses of the component medications: aspirin (100 mg), ramipril (2.5, 5, or 10 mg), and atorvastatin (20 mg or 40 mg).

Researchers used the Morisky Medication Adherence Scale to gauge participants’ adherence to their medication regimen and found the polypill group was more adherent. Patients who received the polypill were more likely to have a high level of adherence at 6 months (70.6% vs. 62.7%) and at 24 months (74.1% vs. 63.2%), they reported. (The Morisky tool is the subject of some controversy because of the aggressive licensing tactics of its creator.)

The primary endpoint of cardiovascular death, MI, stroke, or urgent revascularization was significantly less likely in the polypill group during a median of 3 years of follow-up (hazard ratio, 0.76; P = .02).

“A primary-outcome event occurred in 118 of 1,237 patients (9.5%) in the polypill group and in 156 of 1,229 (12.7%) in the usual-care group,” the researchers report.

“Probably, adherence is the most important reason of how this works,” Valentin Fuster, MD, physician-in-chief at Mount Sinai Hospital, New York, who led the study, said at ESC 2022.

Still, some clinicians were left scratching their heads by the lack of difference between treatment groups in average blood pressure and levels of low-density lipoprotein (LDL) cholesterol.

In the group that received the polypill, average systolic and diastolic blood pressure at 24 months were 135.2 mmHg and 74.8 mmHg, respectively. In the group that received usual care, those values were 135.5 mmHg and 74.9 mmHg, respectively.

Likewise, “no substantial differences were found in LDL-cholesterol levels over time between the groups, with a mean value at 24 months of 67.7 mg/dL in the polypill group and 67.2 mg/dL in the usual-care group,” according to the researchers.

One explanation for the findings is that greater adherence led to beneficial effects that were not reflected in lipid and blood pressure measurements, the investigators said. Alternatively, the open-label trial design could have led to different health behaviors between groups, they suggested.

Martha Gulati, MD, director of preventive cardiology at Cedars-Sinai Medical Center, Los Angeles, said she loves the idea of polypills. But she wonders about the lack of difference in blood pressure and lipids in SECURE.

Dr. Gulati said she sees in practice how medication adherence and measurements of blood pressure and lipids typically go hand in hand.

When a patient initially responds to a medication, but then their LDL cholesterol goes up later, “my first question is, ‘Are you still taking your medication or how frequently are you taking it?’” Dr. Gulati said in an interview. “And I get all kinds of answers.”

“If you are more adherent, why wouldn’t your LDL actually be lower, and why wouldn’t your blood pressure be lower?” she asked.

Can the results be replicated?

Ethan J. Weiss, MD, a cardiologist and volunteer associate clinical professor of medicine at the University of California, San Francisco, said the SECURE results are “spectacular,” but the seeming disconnect with the biomarker measurements “doesn’t make for a clean story.”

“It just seems like if you are making an argument that this is a way to improve compliance ... you would see some evidence of improved compliance objectively” in the biomarker readings, Dr. Weiss said.

Trying to understand how the polypill worked requires more imagination. “Or it makes you just say, ‘Who cares what the mechanism is?’ These people did a lot better, full stop, and that’s all that matters,” he said.

Dr. Weiss said he expects some degree of replication of the results may be needed before practice changes.

To Steven E. Nissen, MD, chief academic officer of the Heart and Vascular Institute at Cleveland Clinic, the results “don’t make any sense.”

“If they got the same results on the biomarkers that the pill was designed to intervene upon, why are the [primary outcome] results different? It’s completely unexplained,” Dr. Nissen said.

In general, Dr. Nissen has not been an advocate of the polypill approach in higher-income countries.

“Medicine is all about customization of therapy,” he said. “Not everybody needs blood pressure lowering. Not everybody needs the same intensity of LDL reduction. We spend much of our lives seeing patients and treating their blood pressure, and if it doesn’t come down adequately, giving them a higher dose or adding another agent.”

Polypills might be reasonable for primary prevention in countries where people have less access to health care resources, he added. In such settings, a low-cost, simple treatment strategy might have benefit.

But Dr. Nissen still doesn’t see a role for a polypill in secondary prevention.

“I think we have to take a step back, take a deep breath, and look very carefully at the science and try to understand whether this, in fact, is sensible,” he said. “We may need another study to see if this can be replicated.”

For Dhruv S. Kazi, MD, the results of the SECURE trial offer an opportunity to rekindle conversations about the use of polypills for cardiovascular protection. These conversations and studies have been taking place for nearly two decades.

Dr. Kazi, associate director of the Richard A. and Susan F. Smith Center for Outcomes Research in Cardiology at Beth Israel Deaconess Medical Center, Boston, has used models to study the expected cost-effectiveness of polypills in various countries.

Although polypills can improve patients’ adherence to their prescribed medications, Dr. Kazi and colleagues have found that treatment gaps are “often at the physician level,” with many patients not prescribed all of the medications from which they could benefit.

Availability of polypills could help address those gaps. At the same time, many patients, even those with higher incomes, may have a strong preference for taking a single pill.

Dr. Kazi’s research also shows that a polypill approach may be more economically attractive as countries develop because successful treatment averts cardiovascular events that are costlier to treat.

“In the United States, in order for this to work, we would need a polypill that is both available widely but also affordable,” Dr. Kazi said. “It is going to require a visionary mover” to make that happen.

That could include philanthropic foundations. But it could also be a business opportunity for a company like Barcelona-based Ferrer, which provided the polypills for the SECURE trial.

The clinical and economic evidence in support of polypills has been compelling, Dr. Kazi said: “We have to get on with the business of implementing something that is effective and has the potential to greatly improve population health at scale.” 

The SECURE trial was funded by the European Union Horizon 2020 program and coordinated by the Spanish National Center for Cardiovascular Research (CNIC). Ferrer International provided the polypill that was used in the trial. CNIC receives royalties for sales of the polypill from Ferrer. Dr. Weiss is starting a biotech company unrelated to this area of research.

A version of this article first appeared on Medscape.com.


Psychedelics may ease fear of death and dying


Psychedelics can produce positive changes in attitudes about death and dying – and may be a way to help ease anxiety and depression toward the end of life, new research suggests.

In a retrospective study of more than 3,000 participants, near-death experiences occurring naturally or via a psychedelic drug had a “remarkably” similar effect on attitudes about death and dying, with most participants reporting less fear and anxiety around death.

“Individuals with existential anxiety and depression at end of life account for substantial suffering and significantly increased health care expenses from desperate and often futile seeking of intensive and expensive medical treatments,” co-investigator Roland Griffiths, PhD, Center for Psychedelics and Consciousness Research at Johns Hopkins Medicine, Baltimore, told this news organization.

“The present findings, which show that both psychedelic and non–drug-occasioned experiences can produce positive and enduring changes in attitudes about death, suggest the importance of future prospective experimental and clinical observational studies to better understand mechanisms of such changes as well as their potential clinical utility in ameliorating suffering related to fear of death,” Dr. Griffiths said.

The results were published online Aug. 24 in PLOS ONE.

Direct comparisons

Both psychedelic drug experiences and near-death experiences can alter perspectives on death and dying, but there have been few direct comparisons of these phenomena, the investigators note.

In the current study, they directly compared psychedelic-occasioned and nondrug experiences that altered individuals’ beliefs about death.

The researchers surveyed 3,192 mostly White adults from the United States, including 933 who had a natural, nondrug near-death experience and 2,259 who had psychedelic near-death experiences induced with lysergic acid diethylamide (LSD), psilocybin, ayahuasca, or N,N-dimethyltryptamine (DMT).

The psychedelic group included more men than women, and its members tended to be younger at the time of the experience than those in the nondrug group.

Nearly 90% of individuals in both groups said that they were less afraid of death than they were before their experiences.

About half of both groups said they’d encountered something they might call “God” during the experience.

Three-quarters of the psychedelic group and 85% of the nondrug group rated their experiences as among the top five most personally meaningful and spiritually significant events of their life.

Individuals in both groups also reported moderate to strong, lasting positive changes in personal well-being and in life purpose and meaning after their experiences.

However, there were some differences between the groups.

More research needed

Compared with the psychedelic group, the nondrug group was more likely to report having been unconscious or clinically dead, or that their life had been in imminent danger.

The nonpsychedelic group was also more likely to report that their experience was very brief, lasting 5 minutes or less.

Both the psychedelic and nondrug participants showed robust increases on standardized measures of mystical and near-death experiences, but these measures were significantly greater in the psychedelic group.

The survey findings are in line with several recent clinical trials showing that a single treatment with the psychedelic psilocybin produced sustained decreases in anxiety and depression among patients with a life-threatening cancer diagnosis.

This includes a 2016 study by Dr. Griffiths and colleagues, which included 51 patients with late-stage cancer. As reported at the time, results showed a single, high dose of psilocybin had rapid, clinically significant, and lasting effects on mood and anxiety.

Limitations of the current survey cited by the researchers include the use of retrospective self-report to describe changes in death attitudes and the subjective features of the experiences. Also, respondents were a self-selected study population that may not be representative of all psychedelic or near-death experiences.

In addition, the study did not attempt to document worldview and other belief changes, such as increased belief in afterlife, that might help explain why death attitudes changed.

Looking ahead, the researchers note that future studies are needed to better understand the potential clinical use of psychedelics in ameliorating suffering related to fear of death.

Support through the Johns Hopkins Center for Psychedelic and Consciousness Research was provided by Tim Ferriss, Matt Mullenweg, Blake Mycoskie, Craig Nerenberg, and the Steven and Alexandra Cohen Foundation. Funding was also provided by the Y.C. Ho/Helen and Michael Chiang Foundation. The investigators have reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Psychedelics can produce positive changes in attitudes about death and dying – and may be a way to help ease anxiety and depression toward the end of life, new research suggests.

In a retrospective study of more than 3,000 participants, near-death experiences occurring naturally or via a psychedelic drug had a “remarkably” similar effect on attitudes about death and dying, with most participants reporting less fear and anxiety around death.

“Individuals with existential anxiety and depression at end of life account for substantial suffering and significantly increased health care expenses from desperate and often futile seeking of intensive and expensive medical treatments,” co-investigator Roland Griffiths, PhD, Center for Psychedelics and Consciousness Research at Johns Hopkins Medicine, Baltimore, told this news organization.

Dr. Roland R. Griffiths


“The present findings, which show that both psychedelic and non–drug-occasioned experiences can produce positive and enduring changes in attitudes about death, suggest the importance of future prospective experimental and clinical observational studies to better understand mechanisms of such changes as well as their potential clinical utility in ameliorating suffering related to fear of death,” Dr. Griffiths said.

The results were published online Aug. 24 in PLOS ONE.
 

Direct comparisons

Both psychedelic drug experiences and near-death experiences can alter perspectives on death and dying, but there have been few direct comparisons of these phenomena, the investigators note.

In the current study, they directly compared psychedelic-occasioned and nondrug experiences, which altered individuals’ beliefs about death.

The researchers surveyed 3,192 mostly White adults from the United States, including 933 who had a natural, nondrug near-death experience and 2,259 who had psychedelic near-death experiences induced with lysergic acid diethylamide, psilocybin, ayahuasca, or N,N-dimethyltryptamine.

The psychedelic group had more men than women and tended to be younger at the time of the experience than was the nondrug group.

Nearly 90% of individuals in both groups said that they were less afraid of death than they were before their experiences.

About half of both groups said they’d encountered something they might call “God” during the experience.

Three-quarters of the psychedelic group and 85% of the nondrug group rated their experiences as among the top five most personally meaningful and spiritually significant events of their life.

Individuals in both groups also reported moderate- to strong-lasting positive changes in personal well-being and life purpose and meaning after their experiences. 

However, there were some differences between the groups.
 

More research needed

Compared with the psychedelic group, the nondrug group was more likely to report being unconscious, clinically dead, or that their life was in imminent danger.

The nonpsychedelic group was also more likely to report that their experience was very brief, lasting 5 minutes or less.

Both the psychedelic and nondrug participants showed robust increases on standardized measures of mystical and near-death experiences, but these measures were significantly greater in the psychedelic group.

The survey findings are in line with several recent clinical trials showing that a single treatment with the psychedelic psilocybin produced sustained decreases in anxiety and depression among patients with a life-threatening cancer diagnosis.

This includes a 2016 study by Dr. Griffiths and colleagues, which included 51 patients with late-stage cancer. As reported at the time, results showed a single, high dose of psilocybin had rapid, clinically significant, and lasting effects on mood and anxiety.

Limitations of the current survey cited by the researchers include the use of retrospective self-report to describe changes in death attitudes and the subjective features of the experiences. Also, respondents were a self-selected study population that may not be representative of all psychedelic or near-death experiences.

In addition, the study did not attempt to document worldview and other belief changes, such as increased belief in afterlife, that might help explain why death attitudes changed.

Looking ahead, the researchers note that future studies are needed to better understand the potential clinical use of psychedelics in ameliorating suffering related to fear of death.

Support through the Johns Hopkins Center for Psychedelic and Consciousness Research was provided by Tim Ferriss, Matt Mullenweg, Blake Mycoskie, Craig Nerenberg, and the Steven and Alexandra Cohen Foundation. Funding was also provided by the Y.C. Ho/Helen and Michael Chiang Foundation. The investigators have reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Psychedelics can produce positive changes in attitudes about death and dying – and may be a way to help ease anxiety and depression toward the end of life, new research suggests.

In a retrospective study of more than 3,000 participants, near-death experiences occurring naturally or via a psychedelic drug had a “remarkably” similar effect on attitudes about death and dying, with most participants reporting less fear and anxiety around death.

“Individuals with existential anxiety and depression at end of life account for substantial suffering and significantly increased health care expenses from desperate and often futile seeking of intensive and expensive medical treatments,” co-investigator Roland Griffiths, PhD, Center for Psychedelics and Consciousness Research at Johns Hopkins Medicine, Baltimore, told this news organization.

Dr. Roland R. Griffiths


“The present findings, which show that both psychedelic and non–drug-occasioned experiences can produce positive and enduring changes in attitudes about death, suggest the importance of future prospective experimental and clinical observational studies to better understand mechanisms of such changes as well as their potential clinical utility in ameliorating suffering related to fear of death,” Dr. Griffiths said.

The results were published online Aug. 24 in PLOS ONE.
 

Direct comparisons

Both psychedelic drug experiences and near-death experiences can alter perspectives on death and dying, but there have been few direct comparisons of these phenomena, the investigators note.

In the current study, they directly compared psychedelic-occasioned and nondrug experiences that altered individuals’ beliefs about death.

The researchers surveyed 3,192 mostly White adults from the United States, including 933 who had a natural, nondrug near-death experience and 2,259 who had psychedelic near-death experiences induced with lysergic acid diethylamide, psilocybin, ayahuasca, or N,N-dimethyltryptamine.

The psychedelic group included more men than women and tended to be younger at the time of the experience than the nondrug group.

Nearly 90% of individuals in both groups said that they were less afraid of death than they were before their experiences.

About half of both groups said they’d encountered something they might call “God” during the experience.

Three-quarters of the psychedelic group and 85% of the nondrug group rated their experiences as among the top five most personally meaningful and spiritually significant events of their life.

Individuals in both groups also reported moderate- to strong-lasting positive changes in personal well-being and life purpose and meaning after their experiences. 

However, there were some differences between the groups.
 

More research needed

Compared with the psychedelic group, the nondrug group was more likely to report having been unconscious or clinically dead, or that their life had been in imminent danger.

The nondrug group was also more likely to report that their experience was very brief, lasting 5 minutes or less.

Both the psychedelic and nondrug participants showed robust increases on standardized measures of mystical and near-death experiences, but these measures were significantly greater in the psychedelic group.

The survey findings are in line with several recent clinical trials showing that a single treatment with the psychedelic psilocybin produced sustained decreases in anxiety and depression among patients with a life-threatening cancer diagnosis.

This includes a 2016 study by Dr. Griffiths and colleagues, which included 51 patients with late-stage cancer. As reported at the time, results showed a single, high dose of psilocybin had rapid, clinically significant, and lasting effects on mood and anxiety.

Limitations of the current survey cited by the researchers include the use of retrospective self-report to describe changes in death attitudes and the subjective features of the experiences. Also, respondents were a self-selected study population that may not be representative of all psychedelic or near-death experiences.

In addition, the study did not attempt to document worldview and other belief changes, such as increased belief in afterlife, that might help explain why death attitudes changed.

Looking ahead, the researchers note that future studies are needed to better understand the potential clinical use of psychedelics in ameliorating suffering related to fear of death.

Support through the Johns Hopkins Center for Psychedelic and Consciousness Research was provided by Tim Ferriss, Matt Mullenweg, Blake Mycoskie, Craig Nerenberg, and the Steven and Alexandra Cohen Foundation. Funding was also provided by the Y.C. Ho/Helen and Michael Chiang Foundation. The investigators have reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

FROM PLOS ONE


Blood test for multiple cancers: Many false positives


PARIS – New results from a large prospective trial give a better idea of how a blood test that can detect multiple cancers performs in a “real-life” setting.

“As this technology develops, people must continue with their standard cancer screening, but this is a glimpse of what the future may hold,” commented study investigator Deborah Schrag, MD, MPH, chair, department of medicine, Memorial Sloan Kettering Cancer Center, New York.

For the PATHFINDER study, the Galleri blood test (developed by Grail) was used in 6,621 healthy individuals aged over 50, with or without additional cancer risk factors (such as history of smoking or genetic risk).

It found a positive cancer signal in 92 individuals (1.4%). 

None of the individuals who tested positive was known to have cancer at the time of testing. Subsequent workup, which could include scans and/or biopsy, found cancer in 38% of those with a positive test.

“When the test was positive, the workups were typically done in less than 3 months,” Dr. Schrag commented, adding that “the blood test typically predicted the origin of the cancer.”

Dr. Schrag presented the findings at the annual meeting of the European Society for Medical Oncology (ESMO).

Approached for comment, Anthony J. Olszanski, MD, RPh, vice chair of research at the Fox Chase Cancer Center, Philadelphia, noted that the use of a blood test to “find” cancer has long been on the minds of patients. “It is not uncommon to hear oncology patients ask: ‘Why didn’t my doctor find my cancer earlier, on blood tests?’ ”

As this study suggests, finding a malignancy before it becomes apparent on imaging or because of symptoms is one step closer to becoming a reality. “But although this is an important study, it must be noted that only about 40% of patients with a positive test result were actually found to have cancer,” Dr. Olszanski said. “Conversely, about 60% of patients with a positive test result likely suffered from a considerable amount of anxiety that may persist even after further testing did not reveal a malignancy.”

Another important issue is that such testing may incur substantial health care cost. “Less than 2 participants per 100 had a positive test result, and those patients underwent further testing to interrogate the result,” he added. “It also remains unclear if detecting cancer early will lead to better outcomes.”

Whether the test will be cost-effective remains unknown; Dr. Schrag emphasized that a formal cost analysis has not yet been performed. “This technology is not ready for population-wide screening, but as the technology improves, costs will go down,” she said.

Dr. Schrag also added that this is a new concept and the trial shows it is feasible to detect cancer using a blood test. “It was not designed to determine if the test can decrease cancer mortality, which is obviously the purpose of screening, but it’s premature for that,” she said.
 

Details of the results

The Galleri test uses cell-free DNA and machine learning to detect a common cancer signal across more than 50 cancer types as well as to predict cancer signal origin.

Overall, the test detected a cancer signal in 1.4% (n = 92) of participants with analyzable samples.

A total of 90 participants underwent diagnostic testing (33 true positives and 57 false positives). Of the true positives, 81.8% underwent more than one invasive diagnostic test, as did 29.8% of false positives.

Specificity was 99.1%, positive predictive value (PPV) was approximately 40%, and 73% of those who were true positives had diagnostic resolution in less than 3 months.
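The figures above follow from standard confusion-matrix definitions. As a minimal illustrative sketch (not the study’s exact calculation, whose denominators come from the full analyzable cohort), the reported 33 true positives and 57 false positives can be plugged into the textbook formulas:

```python
def ppv(tp: int, fp: int) -> float:
    """Positive predictive value: fraction of positive tests that are true positives."""
    return tp / (tp + fp)

def specificity(tn: int, fp: int) -> float:
    """Specificity: fraction of disease-free participants who test negative."""
    return tn / (tn + fp)

# Counts reported above: 33 true positives and 57 false positives among
# participants who completed diagnostic testing.
print(f"PPV = {ppv(33, 57):.1%}")  # roughly 37%, in line with the ~40% reported
```

Note the general point this arithmetic makes: when disease prevalence is low, even a test with very high specificity generates enough false positives to hold the PPV well below 100%.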

Of the cancers that were diagnosed, 19 were solid tumors and 17 were hematologic cancers; 7 were diagnosed in people with a history of cancer, 26 were cancer types without standard screening, and 14 were diagnosed at an early stage.

“What is exciting about this new paradigm is that many of these were cancers for which we don’t have standard screening,” said Dr. Schrag.

Dr. Schrag noted that, given the immense interest in this study, the manufacturer is working to refine and improve the assay. A reanalysis of all specimens was conducted using a refined version of the test.

“Importantly, the new analysis identified fewer patients with positive signals – from 1.4% to 0.9%,” she said. “Specificity improved to 99.5%, as did PPV – from 38% to 43.1% – and more people need to be screened to find a cancer – up to 263 from 189.”
 

False positives concerning

Previous, and very similar, results from the PATHFINDER trial were presented last year at the annual meeting of the American Society of Clinical Oncology.

Max Diehn, MD, PhD, associate professor of radiation oncology at Stanford (Calif.) University, was an invited discussant for the study.

He pointed out that there were more false positives than true positives and noted that “there were a significant number of invasive procedures in false positives, which could cause harm to these patients who don’t have cancer.”

Dr. Diehn also explained that most true positives were for lymphoid malignancies, not solid tumors, and it is not known whether early detection of lymphoid malignancy has clinical utility. 

The Galleri test is already available in the United States and is being offered by a number of U.S. health networks. However, it is not approved by the U.S. Food and Drug Administration and is not covered by medical insurance, so individuals have to pay around $950 for it out of pocket. 

Although some experts are excited by its potential, describing it as a “game-changer,” others are concerned that there are no clinical pathways in place yet to deal with the results of such a blood test, and say it is not ready for prime time. 

The study was funded by Grail, a subsidiary of Illumina. Dr. Schrag has reported relationships with Grail, the Journal of the American Medical Association, and Pfizer. Several coauthors also have disclosed relationships with industry. Dr. Olszanski has reported participating in advisory boards for BMS, Merck, and Instil Bio, and running trials for them.

A version of this article first appeared on Medscape.com.



AT ESMO 2022


Prior psychological distress tied to ‘long-COVID’ conditions


Experiencing psychological distress prior to becoming infected with SARS-CoV-2 is tied to an increased risk for post-COVID conditions often called “long COVID,” new research suggests.

In an analysis of almost 55,000 adult participants in three ongoing studies, having depression, anxiety, worry, perceived stress, or loneliness early in the pandemic, before SARS-CoV-2 infection, was associated with a 50% increased risk for developing long COVID. These types of psychological distress were also associated with a 15% to 51% greater risk for impairment in daily life among individuals with long COVID.

Psychological distress was even more strongly associated with developing long COVID than were physical health risk factors, and the increased risk was not explained by health behaviors such as smoking or physical comorbidities, researchers note.

“Our findings suggest the need to consider psychological health in addition to physical health as risk factors of long COVID-19,” lead author Siwen Wang, MD, postdoctoral fellow, department of nutrition, Harvard T. H. Chan School of Public Health, Boston, said in an interview.

“We need to increase public awareness of the importance of mental health and focus on getting mental health care for people who need it, increasing the supply of mental health clinicians and improving access to care,” she said.

The findings were published online in JAMA Psychiatry.
 

‘Poorly understood’

Postacute sequelae of SARS-CoV-2 (“long COVID”), which are “signs and symptoms consistent with COVID-19 that extend beyond 4 weeks from onset of infection,” constitute “an emerging health issue,” the investigators write.

Dr. Wang noted that it has been estimated that 8-23 million Americans have developed long COVID. However, “despite the high prevalence and daily life impairment associated with long COVID, it is still poorly understood, and few risk factors have been established,” she said.

Although psychological distress may be implicated in long COVID, only three previous studies investigated psychological factors as potential contributors, the researchers note. Also, no study has investigated the potential role of other common manifestations of distress that have increased during the pandemic, such as loneliness and perceived stress, they add.

To investigate these issues, the researchers turned to three large ongoing longitudinal studies: the Nurses’ Health Study II (NHSII), the Nurses’ Health Study 3 (NHS3), and the Growing Up Today Study (GUTS).

They analyzed data on 54,960 total participants (96.6% women; mean age, 57.5 years). Of the full group, 38% were active health care workers.

Participants completed an online COVID-19 questionnaire from April 2020 to Sept. 1, 2020 (baseline), and monthly surveys thereafter. Beginning in August 2020, surveys were administered quarterly. The end of follow-up was in November 2021.

The COVID questionnaires included questions about positive SARS-CoV-2 test results, COVID symptoms and hospitalization since March 1, 2020, and the presence of long-term COVID symptoms, such as fatigue, respiratory problems, persistent cough, muscle/joint/chest pain, smell/taste problems, confusion/disorientation/brain fog, depression/anxiety/changes in mood, headache, and memory problems.

Participants who reported these post-COVID conditions were asked about the frequency of symptoms and the degree of impairment in daily life.
 

Inflammation, immune dysregulation implicated?

The Patient Health Questionnaire–4 (PHQ-4) was used to assess for anxiety and depressive symptoms in the past 2 weeks. It consists of a two-item depression measure (PHQ-2) and a two-item Generalized Anxiety Disorder Scale (GAD-2).

Non–health care providers completed two additional assessments of psychological distress: the four-item Perceived Stress Scale and the three-item UCLA Loneliness Scale.

The researchers included demographic factors, weight, smoking status, marital status, and medical conditions, including diabetes, hypertension, hypercholesterolemia, asthma, and cancer, and socioeconomic factors as covariates.

For each participant, the investigators calculated the number of types of distress experienced at a high level, including probable depression, probable anxiety, worry about COVID-19, being in the top quartile of perceived stress, and loneliness.

During the 19 months of follow-up (1-47 weeks after baseline), 6% of respondents reported a positive result on a SARS-CoV-2 antibody, antigen, or polymerase chain reaction test.

Of these, 43.9% reported long-COVID conditions, with most reporting that symptoms lasted 2 months or longer; 55.8% reported at least occasional daily life impairment.

The most common post-COVID conditions were fatigue (reported by 56%), loss of smell or taste problems (44.6%), shortness of breath (25.5%), confusion/disorientation/brain fog (24.5%), and memory issues (21.8%).

Among participants who had been infected, each type of preinfection psychological distress was associated with post-COVID conditions after adjusting for sociodemographic factors, health behaviors, and comorbidities.

In addition, participants who had experienced at least two types of distress prior to infection were at nearly 50% increased risk for post–COVID conditions (risk ratio, 1.49; 95% confidence interval, 1.23-1.80).

Among those with post-COVID conditions, all types of distress were associated with increased risk for daily life impairment (RR range, 1.15-1.51).

Senior author Andrea Roberts, PhD, senior research scientist at the Harvard T. H. Chan School of Public Health, Boston, noted that the investigators did not examine biological mechanisms potentially underlying the association they found.

However, “based on prior research, it may be that inflammation and immune dysregulation related to psychological distress play a role in the association of distress with long COVID, but we can’t be sure,” Dr. Roberts said.
 

Contributes to the field

Commenting for this article, Yapeng Su, PhD, a postdoctoral researcher at the Fred Hutchinson Cancer Research Center in Seattle, called the study “great work contributing to the long-COVID research field and revealing important connections” with psychological stress prior to infection.

Dr. Su, who was not involved with the study, was previously at the Institute for Systems Biology, also in Seattle, and has written about long COVID.

He noted that the “biological mechanism of such intriguing linkage is definitely the important next step, which will likely require deep phenotyping of biological specimens from these patients longitudinally.”

Dr. Wang pointed to past research suggesting that some patients with mental illness “sometimes develop autoantibodies that have also been associated with increased risk of long COVID.” In addition, depression “affects the brain in ways that may explain certain cognitive symptoms in long COVID,” she added.

More studies are now needed to understand how psychological distress increases the risk for long COVID, said Dr. Wang.

The research was supported by grants from the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the National Institutes of Health, the Dean’s Fund for Scientific Advancement Acceleration Award from the Harvard T. H. Chan School of Public Health, the Massachusetts Consortium on Pathogen Readiness Evergrande COVID-19 Response Fund Award, and the Veterans Affairs Health Services Research and Development Service funds. Dr. Wang and Dr. Roberts have reported no relevant financial relationships. The other investigators’ disclosures are listed in the original article. Dr. Su reports no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Experiencing psychological distress prior to becoming infected with SARS-CoV-2 is tied to an increased risk for post-COVID conditions often called “long COVID,” new research suggests.

In an analysis of almost 55,000 adult participants in three ongoing studies, having depression, anxiety, worry, perceived stress, or loneliness early in the pandemic, before SARS-CoV-2 infection, was associated with a 50% increased risk for developing long COVID. These types of psychological distress were also associated with a 15% to 51% greater risk for impairment in daily life among individuals with long COVID.

Psychological distress was even more strongly associated with developing long COVID than were physical health risk factors, and the increased risk was not explained by health behaviors such as smoking or physical comorbidities, researchers note.

“Our findings suggest the need to consider psychological health in addition to physical health as risk factors of long COVID-19,” lead author Siwen Wang, MD, postdoctoral fellow, department of nutrition, Harvard T. H. Chan School of Public Health, Boston, said in an interview.

“We need to increase public awareness of the importance of mental health and focus on getting mental health care for people who need it, increasing the supply of mental health clinicians and improving access to care,” she said.

The findings were published online in JAMA Psychiatry.
 

‘Poorly understood’

Postacute sequelae of SARS-CoV-2 (“long COVID”), which are “signs and symptoms consistent with COVID-19 that extend beyond 4 weeks from onset of infection,” constitute “an emerging health issue,” the investigators write.

Dr. Wang noted that it has been estimated that 8-23 million Americans have developed long COVID. However, “despite the high prevalence and daily life impairment associated with long COVID, it is still poorly understood, and few risk factors have been established,” she said.

Although psychological distress may be implicated in long COVID, only three previous studies investigated psychological factors as potential contributors, the researchers note. Also, no study has investigated the potential role of other common manifestations of distress that have increased during the pandemic, such as loneliness and perceived stress, they add.

To investigate these issues, the researchers turned to three large ongoing longitudinal studies: the Nurses’ Health Study II (NHSII), the Nurses’ Health Study 3 (NHS3), and the Growing Up Today Study (GUTS).

They analyzed data on 54,960 total participants (96.6% women; mean age, 57.5 years). Of the full group, 38% were active health care workers.

Participants completed an online COVID-19 questionnaire from April 2020 to Sept. 1, 2020 (baseline), and monthly surveys thereafter. Beginning in August 2020, surveys were administered quarterly. The end of follow-up was in November 2021.

The COVID questionnaires included questions about positive SARS-CoV-2 test results, COVID symptoms and hospitalization since March 1, 2020, and the presence of long-term COVID symptoms, such as fatigue, respiratory problems, persistent cough, muscle/joint/chest pain, smell/taste problems, confusion/disorientation/brain fog, depression/anxiety/changes in mood, headache, and memory problems.

Participants who reported these post-COVID conditions were asked about the frequency of symptoms and the degree of impairment in daily life.
 

Inflammation, immune dysregulation implicated?

The Patient Health Questionnaire–4 (PHQ-4) was used to assess for anxiety and depressive symptoms in the past 2 weeks. It consists of a two-item depression measure (PHQ-2) and a two-item Generalized Anxiety Disorder Scale (GAD-2).
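As a rough illustration of how such screening scores are tallied, here is a minimal Python sketch of PHQ-4 scoring. The item order (anxiety items first) and the subscale cutoff of 3 follow the published instrument, but the study's exact coding is an assumption here.

```python
def score_phq4(items):
    """Score the 4-item PHQ-4 (each item rated 0-3 for the past 2 weeks).

    Illustrative sketch only: in the standard instrument the first two
    items form the GAD-2 (anxiety) and the last two the PHQ-2 (depression);
    a subscale score of 3 or more is the usual screening cutoff. The
    study's exact criteria are not reproduced here.
    """
    if len(items) != 4 or not all(0 <= i <= 3 for i in items):
        raise ValueError("PHQ-4 expects four item scores in the range 0-3")
    gad2 = sum(items[:2])   # anxiety subscale (GAD-2)
    phq2 = sum(items[2:])   # depression subscale (PHQ-2)
    return {
        "gad2": gad2,
        "phq2": phq2,
        "probable_anxiety": gad2 >= 3,
        "probable_depression": phq2 >= 3,
    }
```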

Non–health care providers completed two additional assessments of psychological distress: the four-item Perceived Stress Scale and the three-item UCLA Loneliness Scale.

As covariates, the researchers included demographic factors, weight, smoking status, marital status, socioeconomic factors, and medical conditions such as diabetes, hypertension, hypercholesterolemia, asthma, and cancer.

For each participant, the investigators calculated the number of types of distress experienced at a high level, including probable depression, probable anxiety, worry about COVID-19, being in the top quartile of perceived stress, and loneliness.
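That tally amounts to a simple count over five boolean indicators, as in the sketch below; the field names are hypothetical stand-ins for the study's actual variables.

```python
# Hypothetical indicator names; the study's variable coding is not shown here.
DISTRESS_TYPES = (
    "probable_depression",
    "probable_anxiety",
    "worry_about_covid",
    "perceived_stress_top_quartile",
    "loneliness",
)

def count_distress_types(participant):
    """Count how many of the five distress types a participant reports.

    `participant` maps each indicator name to True/False; missing keys
    are treated as False.
    """
    return sum(bool(participant.get(k, False)) for k in DISTRESS_TYPES)
```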

During the 19 months of follow-up (1-47 weeks after baseline), 6% of respondents reported a positive result on a SARS-CoV-2 antibody, antigen, or polymerase chain reaction test.

Of these, 43.9% reported long-COVID conditions, with most reporting that symptoms lasted 2 months or longer; 55.8% reported at least occasional daily life impairment.

The most common post-COVID conditions were fatigue (reported by 56%), loss of smell or taste problems (44.6%), shortness of breath (25.5%), confusion/disorientation/brain fog (24.5%), and memory issues (21.8%).

Among participants who had been infected, preinfection psychological distress was associated with a considerably higher rate of post-COVID conditions after adjustment for sociodemographic factors, health behaviors, and comorbidities. Each type of distress was associated with post-COVID conditions.

In addition, participants who had experienced at least two types of distress prior to infection were at nearly 50% increased risk for post–COVID conditions (risk ratio, 1.49; 95% confidence interval, 1.23-1.80).
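For readers unfamiliar with how a risk ratio and its confidence interval are derived, the sketch below computes both from a hypothetical 2x2 table using the standard log-scale Wald interval. The counts are invented for illustration and are not the study's data; the study's own estimates came from adjusted regression models, so a raw 2x2 computation only illustrates the scale of the reported numbers.

```python
import math

def risk_ratio_ci(a, n1, c, n0, z=1.96):
    """Risk ratio (exposed vs. unexposed) with a Wald 95% CI on the log scale.

    a / n1: events and total among the exposed
    c / n0: events and total among the unexposed
    """
    rr = (a / n1) / (c / n0)
    # Standard error of log(RR) for a 2x2 table
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Invented counts: 40/100 exposed vs. 27/100 unexposed develop the outcome.
rr, lo, hi = risk_ratio_ci(40, 100, 27, 100)
```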

Among those with post-COVID conditions, all types of distress were associated with increased risk for daily life impairment (RR range, 1.15-1.51).

Senior author Andrea Roberts, PhD, senior research scientist at the Harvard T. H. Chan School of Public Health, Boston, noted that the investigators did not examine biological mechanisms potentially underlying the association they found.

However, “based on prior research, it may be that inflammation and immune dysregulation related to psychological distress play a role in the association of distress with long COVID, but we can’t be sure,” Dr. Roberts said.
 

Contributes to the field

Commenting for this article, Yapeng Su, PhD, a postdoctoral researcher at the Fred Hutchinson Cancer Research Center in Seattle, called the study “great work contributing to the long-COVID research field and revealing important connections” with psychological stress prior to infection.

Dr. Su, who was not involved with the study, was previously at the Institute for Systems Biology, also in Seattle, and has written about long COVID.

He noted that the “biological mechanism of such intriguing linkage is definitely the important next step, which will likely require deep phenotyping of biological specimens from these patients longitudinally.”

Dr. Wang pointed to past research suggesting that some patients with mental illness “sometimes develop autoantibodies that have also been associated with increased risk of long COVID.” In addition, depression “affects the brain in ways that may explain certain cognitive symptoms in long COVID,” she added.

More studies are now needed to understand how psychological distress increases the risk for long COVID, said Dr. Wang.

The research was supported by grants from the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the National Institutes of Health, the Dean’s Fund for Scientific Advancement Acceleration Award from the Harvard T. H. Chan School of Public Health, the Massachusetts Consortium on Pathogen Readiness Evergrande COVID-19 Response Fund Award, and the Veterans Affairs Health Services Research and Development Service funds. Dr. Wang and Dr. Roberts have reported no relevant financial relationships. The other investigators’ disclosures are listed in the original article. Dr. Su reports no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Article Source

FROM JAMA PSYCHIATRY


Barriers to System Quality Improvement in Health Care

Corresponding author: Ebrahim Barkoudah, MD, MPH, Department of Medicine, Brigham and Women’s Hospital, Boston, MA; [email protected]

Process improvement in any industry sector aims to increase the efficiency of resource utilization and delivery methods (cost) and the quality of the product (outcomes), with the goal of ultimately achieving continuous development.1 In the health care industry, variation in processes and outcomes along with inefficiency in resource use that result in changes in value (the product of outcomes/costs) are the general targets of quality improvement (QI) efforts employing various implementation methodologies.2 When the ultimate aim is to serve the patient (customer), best clinical practice includes both maintaining high quality (individual care delivery) and controlling costs (efficient care system delivery), leading to optimal delivery (value-based care). High-quality individual care and efficient care delivery are not competing concepts, but when working to improve both health care outcomes and cost, traditional and nontraditional barriers to system QI often arise.3

The possible scenarios after a QI intervention include backsliding (regression to the mean over time), steady state (a minimal fixed improvement that can be sustained), and continuous improvement (tangible enhancement that persists after the intervention, with a legacy effect).4 The scalability of results can be considered during the process measurement and intervention design phases of any QI project; however, the complex nature of barriers in the health care environment at each level of implementation must be accounted for to prevent failure in the scalability phase.5

The barriers to optimal QI outcomes leading to continuous improvement are multifactorial, involving both intrinsic and extrinsic factors.6 These factors operate at 3 fundamental levels: (1) individual-level inertia/beliefs, prior personal knowledge, and team-related factors7,8; (2) intervention-related and process-specific barriers and clinical practice obstacles; and (3) organizational-level challenges and macro-level and population-level barriers (Figure). The obstacles faced during implementation will likely span 2 of these levels simultaneously, adding complexity that can hinder or prevent a tangible, successful QI process and eventually lead to backsliding or minimal fixed improvement rather than continuous improvement. Furthermore, a patient-centered approach to QI adds further complexity in design and execution, given the importance of reaching sustainable, meaningful improvement by incorporating patients’ preferences, caregiver engagement, and shared decision-making processes.9

Figure. Barriers to progress in quality improvement

Overcoming these multidomain barriers and achieving resilience and sustainability require thoughtful planning and execution through a multifaceted approach.10 A meaningful start is to address clinical inertia at the individual and team levels by promoting open innovation and welcoming collaborations and ideas from outside the institution through networks.11 At the individual level, encouraging participation and motivating health care workers to engage in QI fosters a multidisciplinary, collaborative approach. Concurrently, the organization should support QI capability and scalability by removing competing priorities and establishing effective leadership that ensures resource allocation, communicates clear value-based principles, and fosters an environment of psychological safety.

A continuous improvement state is the optimal QI target, a target that can be attained by removing obstacles and paving a clear pathway to implementation. Focusing on the 3 levels of barriers will position the organization for meaningful and successful QI phases to achieve continuous improvement.

References

1. Adesola S, Baines T. Developing and evaluating a methodology for business process improvement. Business Process Manage J. 2005;11(1):37-46. doi:10.1108/14637150510578719

2. Gershon M. Choosing which process improvement methodology to implement. J Appl Business & Economics. 2010;10(5):61-69.

3. Porter ME, Teisberg EO. Redefining Health Care: Creating Value-Based Competition on Results. Harvard Business Press; 2006.

4. Holweg M, Davies J, De Meyer A, Lawson B, Schmenner RW. Process Theory: The Principles of Operations Management. Oxford University Press; 2018.

5. Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Q. 1998;76(4):593-624. doi:10.1111/1468-0009.00107

6. Solomons NM, Spross JA. Evidence‐based practice barriers and facilitators from a continuous quality improvement perspective: an integrative review. J Nurs Manage. 2011;19(1):109-120. doi:10.1111/j.1365-2834.2010.01144.x

7. Phillips LS, Branch WT, Cook CB, et al. Clinical inertia. Ann Intern Med. 2001;135(9):825-834. doi:10.7326/0003-4819-135-9-200111060-00012

8. Stevenson K, Baker R, Farooqi A, Sorrie R, Khunti K. Features of primary health care teams associated with successful quality improvement of diabetes care: a qualitative study. Fam Pract. 2001;18(1):21-26. doi:10.1093/fampra/18.1.21

9. What is patient-centered care? NEJM Catalyst. January 1, 2017. Accessed August 31, 2022. https://catalyst.nejm.org/doi/full/10.1056/CAT.17.0559

10. Kilbourne AM, Beck K, Spaeth‐Rublee B, et al. Measuring and improving the quality of mental health care: a global perspective. World Psychiatry. 2018;17(1):30-38. doi:10.1002/wps.20482

11. Huang HC, Lai MC, Lin LH, Chen CT. Overcoming organizational inertia to strengthen business model innovation: An open innovation perspective. J Organizational Change Manage. 2013;26(6):977-1002. doi:10.1108/JOCM-04-2012-0047

Issue
Journal of Clinical Outcomes Management - 29(5)
Page Number
175-176

Corresponding author: Ebrahim Barkoudah, MD, MPH, Department of Medicine, Brigham and Women’s Hospital, Boston, MA; [email protected]

Process improvement in any industry sector aims to increase the efficiency of resource utilization and delivery methods (cost) and the quality of the product (outcomes), with the goal of ultimately achieving continuous development.1 In the health care industry, variation in processes and outcomes along with inefficiency in resource use that result in changes in value (the product of outcomes/costs) are the general targets of quality improvement (QI) efforts employing various implementation methodologies.2 When the ultimate aim is to serve the patient (customer), best clinical practice includes both maintaining high quality (individual care delivery) and controlling costs (efficient care system delivery), leading to optimal delivery (value-based care). High-quality individual care and efficient care delivery are not competing concepts, but when working to improve both health care outcomes and cost, traditional and nontraditional barriers to system QI often arise.3

The possible scenarios after a QI intervention include backsliding (regression to the mean over time), steady-state (minimal fixed improvement that could sustain), and continuous improvement (tangible enhancement after completing the intervention with legacy effect).4 The scalability of results can be considered during the process measurement and the intervention design phases of all QI projects; however, the complex nature of barriers in the health care environment during each level of implementation should be accounted for to prevent failure in the scalability phase.5

The barriers to optimal QI outcomes leading to continuous improvement are multifactorial and are related to intrinsic and extrinsic factors.6 These factors include 3 fundamental levels: (1) individual level inertia/beliefs, prior personal knowledge, and team-related factors7,8; (2) intervention-related and process-specific barriers and clinical practice obstacles; and (3) organizational level challenges and macro-level and population-level barriers (Figure). The obstacles faced during the implementation phase will likely include 2 of these levels simultaneously, which could add complexity and hinder or prevent the implementation of a tangible successful QI process and eventually lead to backsliding or minimal fixed improvement rather than continuous improvement. Furthermore, a patient-centered approach to QI would contribute to further complexity in design and execution, given the importance of reaching sustainable, meaningful improvement by adding elements of patient’s preferences, caregiver engagement, and the shared decision-making processes.9

Barriers to progress in quality improvement

Overcoming these multidomain barriers and reaching resilience and sustainability requires thoughtful planning and execution through a multifaceted approach.10 A meaningful start could include addressing the clinical inertia for the individual and the team by promoting open innovation and allowing outside institutional collaborations and ideas through networks.11 On the individual level, encouraging participation and motivating health care workers in QI to reach a multidisciplinary operation approach will lead to harmony in collaboration. Concurrently, the organization should support the QI capability and scalability by removing competing priorities and establishing effective leadership that ensures resource allocation, communicates clear value-based principles, and engenders a psychological safety environment.

A continuous improvement state is the optimal QI target, a target that can be attained by removing obstacles and paving a clear pathway to implementation. Focusing on the 3 levels of barriers will position the organization for meaningful and successful QI phases to achieve continuous improvement.

Corresponding author: Ebrahim Barkoudah, MD, MPH, Department of Medicine, Brigham and Women’s Hospital, Boston, MA; [email protected]

Process improvement in any industry sector aims to increase the efficiency of resource utilization and delivery methods (cost) and the quality of the product (outcomes), with the goal of ultimately achieving continuous development.1 In the health care industry, variation in processes and outcomes along with inefficiency in resource use that result in changes in value (the product of outcomes/costs) are the general targets of quality improvement (QI) efforts employing various implementation methodologies.2 When the ultimate aim is to serve the patient (customer), best clinical practice includes both maintaining high quality (individual care delivery) and controlling costs (efficient care system delivery), leading to optimal delivery (value-based care). High-quality individual care and efficient care delivery are not competing concepts, but when working to improve both health care outcomes and cost, traditional and nontraditional barriers to system QI often arise.3

The possible scenarios after a QI intervention include backsliding (regression to the mean over time), steady-state (minimal fixed improvement that could sustain), and continuous improvement (tangible enhancement after completing the intervention with legacy effect).4 The scalability of results can be considered during the process measurement and the intervention design phases of all QI projects; however, the complex nature of barriers in the health care environment during each level of implementation should be accounted for to prevent failure in the scalability phase.5

The barriers to optimal QI outcomes leading to continuous improvement are multifactorial and are related to intrinsic and extrinsic factors.6 These factors include 3 fundamental levels: (1) individual level inertia/beliefs, prior personal knowledge, and team-related factors7,8; (2) intervention-related and process-specific barriers and clinical practice obstacles; and (3) organizational level challenges and macro-level and population-level barriers (Figure). The obstacles faced during the implementation phase will likely include 2 of these levels simultaneously, which could add complexity and hinder or prevent the implementation of a tangible successful QI process and eventually lead to backsliding or minimal fixed improvement rather than continuous improvement. Furthermore, a patient-centered approach to QI would contribute to further complexity in design and execution, given the importance of reaching sustainable, meaningful improvement by adding elements of patient’s preferences, caregiver engagement, and the shared decision-making processes.9

Barriers to progress in quality improvement

Overcoming these multidomain barriers and reaching resilience and sustainability requires thoughtful planning and execution through a multifaceted approach.10 A meaningful start could include addressing the clinical inertia for the individual and the team by promoting open innovation and allowing outside institutional collaborations and ideas through networks.11 On the individual level, encouraging participation and motivating health care workers in QI to reach a multidisciplinary operation approach will lead to harmony in collaboration. Concurrently, the organization should support the QI capability and scalability by removing competing priorities and establishing effective leadership that ensures resource allocation, communicates clear value-based principles, and engenders a psychological safety environment.

A continuous improvement state is the optimal QI target, a target that can be attained by removing obstacles and paving a clear pathway to implementation. Focusing on the 3 levels of barriers will position the organization for meaningful and successful QI phases to achieve continuous improvement.

References

1. Adesola S, Baines T. Developing and evaluating a methodology for business process improvement. Business Process Manage J. 2005;11(1):37-46. doi:10.1108/14637150510578719

2. Gershon M. Choosing which process improvement methodology to implement. J Appl Business & Economics. 2010;10(5):61-69.

3. Porter ME, Teisberg EO. Redefining Health Care: Creating Value-Based Competition on Results. Harvard Business Press; 2006.

4. Holweg M, Davies J, De Meyer A, Lawson B, Schmenner RW. Process Theory: The Principles of Operations Management. Oxford University Press; 2018.

5. Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Q. 1998;76(4):593-624. doi:10.1111/1468-0009.00107

6. Solomons NM, Spross JA. Evidence‐based practice barriers and facilitators from a continuous quality improvement perspective: an integrative review. J Nurs Manage. 2011;19(1):109-120. doi:10.1111/j.1365-2834.2010.01144.x

7. Phillips LS, Branch WT, Cook CB, et al. Clinical inertia. Ann Intern Med. 2001;135(9):825-34. doi:10.7326/0003-4819-135-9-200111060-00012

8. Stevenson K, Baker R, Farooqi A, Sorrie R, Khunti K. Features of primary health care teams associated with successful quality improvement of diabetes care: a qualitative study. Fam Pract. 2001;18(1):21-26. doi:10.1093/fampra/18.1.21

9. What is patient-centered care? NEJM Catalyst. January 1, 2017. Accessed August 31, 2022. https://catalyst.nejm.org/doi/full/10.1056/CAT.17.0559

10. Kilbourne AM, Beck K, Spaeth‐Rublee B, et al. Measuring and improving the quality of mental health care: a global perspective. World Psychiatry. 2018;17(1):30-8. doi:10.1002/wps.20482

11. Huang HC, Lai MC, Lin LH, Chen CT. Overcoming organizational inertia to strengthen business model innovation: An open innovation perspective. J Organizational Change Manage. 2013;26(6):977-1002. doi:10.1108/JOCM-04-2012-0047

Issue
Journal of Clinical Outcomes Management - 29(5)
Page Number
175-176
Display Headline
Barriers to System Quality Improvement in Health Care

A ‘big breakfast’ diet affects hunger, not weight loss


The old saying ‘breakfast like a king, lunch like a prince, and dine like a pauper’ is wrong, at least in terms of weight control, according to a new University of Aberdeen study published in Cell Metabolism. The idea that ‘front-loading’ calories early in the day might aid dieting was based on the belief that consuming the bulk of daily calories in the morning optimizes weight loss by burning calories more efficiently and quickly.

“There are a lot of myths surrounding the timing of eating and how it might influence either body weight or health,” said senior author Alexandra Johnstone, PhD, a researcher at the Rowett Institute, University of Aberdeen, who specializes in appetite control. “This has been driven largely by the circadian rhythm field. But we in the nutrition field have wondered how this could be possible. Where would the energy go? We decided to take a closer look at how time of day interacts with metabolism.”

Her team undertook a randomized crossover trial of 30 overweight and obese subjects recruited via social media ads. Participants – 16 men and 14 women – had a mean age of 51 years and a body mass index of 27-42 kg/m2 but were otherwise healthy. The researchers compared two calorie-restricted but isoenergetic weight loss diets: morning-loaded calories, with 45% of intake at breakfast, 35% at lunch, and 20% at dinner; and evening-loaded calories, with the inverse proportions of 20%, 35%, and 45% at breakfast, lunch, and dinner, respectively.

Each diet was followed for 4 weeks, with a controlled baseline diet in which calories were balanced throughout the day provided for 1 week at the outset and during a 1-week washout period between the two intervention diets. Each person’s calorie intake was fixed, referenced to their individual measured resting metabolic rate, to assess the effect on weight loss and energy expenditure of meal timing under isoenergetic intake. Both diets were designed to provide the same nutrient composition of 30% protein, 35% carbohydrate, and 35% fat.
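The fixed, proportional allocation described above can be sketched as a simple calculation. This is an illustration only, not the study's protocol code, and the 1,800 kcal daily target used here is a hypothetical example rather than a figure from the trial:

```python
# Illustrative sketch: split a fixed daily calorie target across meals under
# the two isoenergetic schedules described in the study. The 1,800 kcal
# target is a hypothetical example, not a figure from the trial.
MORNING_LOADED = {"breakfast": 0.45, "lunch": 0.35, "dinner": 0.20}
EVENING_LOADED = {"breakfast": 0.20, "lunch": 0.35, "dinner": 0.45}

def meal_calories(daily_kcal: float, schedule: dict) -> dict:
    """Allocate a fixed daily intake across meals by proportion."""
    return {meal: round(daily_kcal * share) for meal, share in schedule.items()}

print(meal_calories(1800, MORNING_LOADED))  # {'breakfast': 810, 'lunch': 630, 'dinner': 360}
print(meal_calories(1800, EVENING_LOADED))  # {'breakfast': 360, 'lunch': 630, 'dinner': 810}
```

Because both schedules sum to 100% of the same daily target, total intake is identical; only the distribution across the day differs, which is what makes the two arms isoenergetic.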

All food and beverages were provided, “making this the most rigorously controlled study to assess timing of eating in humans to date,” the team said, “with the aim of accounting for all aspects of energy balance.”
 

No optimum time to eat for weight loss

Results showed that both diets resulted in significant weight reduction at the end of each dietary intervention period, with subjects losing an average of just over 3 kg during each of the 4-week periods. However, there was no difference in weight loss between the morning-loaded and evening-loaded diets.

The relative size of breakfast and dinner – whether a person eats the largest meal early or late in the day – does not have an impact on metabolism, the team said. This challenges previous studies that have suggested that “evening eaters” – now a majority of the U.K. population – have a greater likelihood of gaining weight and greater difficulty in losing it.

“Participants were provided with all their meals for 8 weeks and their energy expenditure and body composition monitored for changes, using gold standard techniques at the Rowett Institute,” Dr. Johnstone said. “The same number of calories was consumed by volunteers at different times of the day, with energy expenditure measured using analysis of urine.

“This study is important because it challenges the previously held belief that eating at different times of the day leads to differential energy expenditure. The research shows that under weight loss conditions there is no optimum time to eat in order to manage weight, and that change in body weight is determined by energy balance.”

Meal timing reduces hunger but does not affect weight loss

However, the research also revealed that when subjects consumed the morning-loaded (big breakfast) diet, they reported feeling significantly less hungry later in the day. “Morning-loaded intake may assist with compliance to weight loss regime, through a greater suppression of appetite,” the authors said, adding that this “could foster easier weight loss in the real world.”

“The participants reported that their appetites were better controlled on the days they ate a bigger breakfast and that they felt satiated throughout the rest of the day,” Dr. Johnstone said.

“We know that appetite control is important to achieve weight loss, and our study suggests that those consuming the most calories in the morning felt less hungry, in contrast to when they consumed more calories in the evening period.

“This could be quite useful in the real-world environment, versus in the research setting that we were working in.”
 

‘Major finding’ for chrono-nutrition

Coauthor Jonathan Johnston, PhD, professor of chronobiology and integrative physiology at the University of Surrey, said: “This is a major finding for the field of meal timing (‘chrono-nutrition’) research. Many aspects of human biology change across the day and we are starting to understand how this interacts with food intake.

“Our new research shows that, in weight loss conditions, the size of breakfast and dinner regulates our appetite but not the total amount of energy that our bodies use,” Dr. Johnston said. “We plan to build upon this research to improve the health of the general population and specific groups, e.g., shift workers.”

It’s possible that shift workers could have different metabolic responses, due to the disruption of their circadian rhythms, the team said. Dr. Johnstone noted that this type of experiment could also be applied to the study of intermittent fasting (time-restricted eating), to help determine the best time of day for people to consume their calories.

“One thing that’s important to note is that when it comes to timing and dieting, there is not likely going to be one diet that fits all,” she concluded. “Figuring this out is going to be the future of diet studies, but it’s something that’s very difficult to measure.”
 

Great variability in individual responses to diets

Commenting on the study, Helena Gibson-Moore, RNutr (PH), nutrition scientist and spokesperson for the British Nutrition Foundation, said: “With about two in three adults in the UK either overweight or obese, it’s important that research continues to look into effective strategies for people to lose weight.” She described the study as “interesting,” and a challenge to previous research supporting “front-loading” calories earlier in the day as more effective for weight loss.

“However, whilst in this study there were no differences in weight loss, participants did report significantly lower hunger when eating a higher proportion of calories in the morning,” she said. “Therefore, for people who prefer having a big breakfast this may still be a useful way to help compliance to a weight loss regime through feeling less hungry in the evening, which in turn may lead to a reduced calorie intake later in the day.

“However, research has shown that as individuals we respond to diets in different ways. For example, a study comparing weight loss after a healthy low-fat diet vs. a healthy low-carbohydrate diet showed similar mean weight loss at 12 months, but there was large variability in the personal responses to each diet with some participants actually gaining weight.

“Differences in individual responses to dietary exposures have led to research into a personalized nutrition approach, which requires collection of personal data and then provides individualized advice based on this.” Research has suggested that personalized dietary and physical activity advice was more effective than conventional generalized advice, she said.

“The bottom line for effective weight loss is that it is clear there is ‘no one size fits all’ approach and different weight loss strategies can work for different people but finding effective strategies for long-term sustainability of weight loss continues to be the major challenge. There are many factors that impact successful weight management and for some people it may not just be what we eat that is important, but also how and when we eat.”

This study was funded by the Medical Research Council and the Scottish Government, Rural and Environment Science and Analytical Services Division.

A version of this article first appeared on Medscape.co.uk.

Article Source

FROM CELL METABOLISM


How does salt intake relate to mortality?

Intake of salt is a biological necessity, inextricably woven into physiologic systems. However, excessive salt intake is associated with high blood pressure. Hypertension is linked to increased cardiovascular morbidity and mortality, and it is estimated that excessive salt intake causes approximately 5 million deaths per year worldwide. Reducing salt intake lowers blood pressure, but processed foods contain “hidden” salt, which makes dietary control of salt difficult. This problem is compounded by growing inequalities in food systems, which present another hurdle to sustaining individual dietary control of salt intake.


Of the 87 risk factors included in the Global Burden of Diseases, Injuries, and Risk Factors Study 2019, high systolic blood pressure was identified as the leading risk factor for disease burden at the global level and for its effect on human health. A range of strategies, including primary care management and reduction in sodium intake, are known to reduce the burden of this critical risk factor. Two questions remain unanswered: “What is the relationship between mortality and adding salt to foods?” and “How much does a reduction in salt intake influence people’s health?”
 

Cardiovascular disease and death

Because dietary sodium intake has been identified as a risk factor for cardiovascular disease and premature death, high sodium intake can be expected to curtail life span. A study tested this hypothesis by analyzing the relationship between sodium intake and life expectancy and survival in 181 countries. Sodium intake correlated positively with life expectancy and inversely with all-cause mortality worldwide and in high-income countries, which argues against dietary sodium intake curtailing life span or being a risk factor for premature death. These results help fuel a scientific debate about sodium intake, life expectancy, and mortality. The debate requires interpreting composite data of positive linear, J-shaped, or inverse linear correlations, which underscores the uncertainty regarding this issue.

In a prospective study of 501,379 participants from the UK Biobank, researchers found that higher frequency of adding salt to foods was significantly associated with a higher risk of premature mortality and lower life expectancy independently of diet, lifestyle, socioeconomic level, and preexisting diseases. They found that the positive association appeared to be attenuated with increasing intake of high-potassium foods (vegetables and fruits).

In addition, the researchers made the following observations:

  • For cause-specific premature mortality, they found that higher frequency of adding salt to foods was significantly associated with a higher risk of both cardiovascular disease mortality and cancer mortality (P-trend < .001 for each).
  • Always adding salt to foods was associated with a lower life expectancy at age 50 years of 1.50 (95% confidence interval, 0.72-2.30) years for women and 2.28 (95% CI, 1.66-2.90) years for men, compared with participants who never or rarely added salt to foods.

The researchers noted that adding salt to foods (usually at the table) is common and is directly related to an individual’s long-term preference for salty foods and habitual salt intake. Indeed, in the Western diet, adding salt at the table accounts for 6%-20% of total salt intake. In addition, commonly used table salt contains 97%-99% sodium chloride, minimizing the potential confounding effects of other dietary factors, including potassium. Therefore, adding salt to foods provides a way to evaluate the association between habitual sodium intake and mortality – something that is relevant, given that it has been estimated that in 2010, a total of 1.65 million deaths from cardiovascular causes were attributable to consumption of more than 2.0 g of sodium per day.

Salt sensitivity

Current evidence supports a recommendation for moderate sodium intake in the general population (3-5 g/day). Persons with hypertension should consume salt at the lower end of that range. Some dietary guidelines recommend consuming less than 2,300 mg dietary sodium per day for persons aged 14 years or older and less for persons aged 2-13 years. Although low sodium intake (< 2.0 g/day) has been achieved in short-term clinical trials, sustained low sodium intake has not been achieved in any of the longer-term clinical trials (duration > 6 months).

The controversy continues as to the relationship between low sodium intake and blood pressure or cardiovascular disease. Most studies show that, in individuals both with and without hypertension, blood pressure is reduced by consuming less sodium; however, it is not necessarily lowered further by reducing intake below the moderate range (3-5 g/day). With a sodium-rich diet, most normotensive individuals experienced a minimal change in mean arterial pressure; for many individuals with hypertension, the values increased by about 4 mm Hg. In addition, among individuals with hypertension who are “salt sensitive,” arterial pressure can increase by more than 10 mm Hg in response to high sodium intake.
 

The effect of potassium

Replacing some of the sodium chloride in regular salt with potassium chloride may mitigate some of salt’s harmful cardiovascular effects. Indeed, salt substitutes that have reduced sodium levels and increased potassium levels have been shown to lower blood pressure.

In one trial, researchers enrolled over 20,000 persons from 600 villages in rural China and compared the use of regular salt (100% sodium chloride) with the use of a salt substitute (75% sodium chloride and 25% potassium chloride by mass).
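The 75/25 composition translates directly into sodium and potassium content per gram of product; the sketch below is illustrative arithmetic from standard molar masses, not data from the trial itself.

```python
# Sodium and potassium delivered per gram of regular salt vs. the 75/25
# substitute, computed from molar masses (Na 22.99, K 39.10, Cl 35.45 g/mol).
NA, K, CL = 22.99, 39.10, 35.45
na_frac_nacl = NA / (NA + CL)   # ~0.393 g sodium per gram of NaCl
k_frac_kcl = K / (K + CL)       # ~0.524 g potassium per gram of KCl

regular_na = 1.00 * na_frac_nacl     # sodium per gram of regular salt
substitute_na = 0.75 * na_frac_nacl  # sodium per gram of substitute
substitute_k = 0.25 * k_frac_kcl     # potassium per gram of substitute

# Gram for gram, the substitute supplies 25% less sodium and ~0.13 g potassium.
print(round(1 - substitute_na / regular_na, 2))  # 0.25
print(round(substitute_k, 2))                    # 0.13
```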

The participants were at high risk for stroke, cardiovascular events, and death. The mean duration of follow-up was 4.74 years. The results were surprising. The rate of stroke was lower with the salt substitute than with regular salt (29.14 events vs. 33.65 events per 1,000 person-years; rate ratio, 0.86; 95% CI, 0.77-0.96; P = .006), as were the rates of major cardiovascular events and death from any cause. The rate of serious adverse events attributed to hyperkalemia was not significantly higher with the salt substitute than with regular salt.
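The headline stroke finding can be checked against the reported event rates. The crude ratio of the two rates rounds to 0.87, slightly above the published 0.86, which comes from the trial’s adjusted model; the snippet below is only a back-of-the-envelope check, not the trial’s actual analysis.

```python
# Crude stroke rate ratio from the reported event rates
# (events per 1,000 person-years of follow-up).
substitute_rate = 29.14  # salt substitute group
regular_rate = 33.65     # regular salt group

rate_ratio = substitute_rate / regular_rate
print(round(rate_ratio, 2))  # 0.87 (published model-based estimate: 0.86)

# Absolute difference: about 4.5 fewer strokes per 1,000 person-years.
print(round(regular_rate - substitute_rate, 2))  # 4.51
```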

Although there is an ongoing debate about the extent of salt’s effects on the cardiovascular system, there is no doubt that in most places in the world, people are consuming more salt than the body needs.

A lot depends upon the kind of diet consumed by a particular population. Processed food is rarely used in rural areas, such as those involved in the above-mentioned trial, with dietary sodium chloride being added while preparing food at home. This is a determining factor with regard to cardiovascular outcomes, but it cannot be generalized to other social-environmental settings.

In much of the world, commercial food preservation introduces a lot of sodium chloride into the diet, so most salt intake cannot be addressed simply by substituting the salt added at the table. Indeed, by comparing the sodium content of cereal-based products currently sold on the Italian market with the respective benchmarks proposed by the World Health Organization, researchers found that for most items the sodium content is much higher than the benchmark, especially for flatbreads, leavened breads, and crackers/savory biscuits. This shows that there is work to be done to achieve the World Health Organization/United Nations objective of a 30% global reduction in sodium intake by 2025.

This article was translated from Univadis Italy. A version of this article first appeared on Medscape.com.



The potential problem(s) with a once-a-year COVID vaccine

Comments from the White House this week suggesting a once-a-year COVID-19 shot for most Americans, “just like your annual flu shot,” were met with backlash from many who say COVID and influenza come from different viruses and need different schedules.

Reactions, ranging from charges of “capitulation” to complaints of too few data, hit the airwaves and social media.

Some, however, agree with the White House vision and say that asking people to get one shot in the fall instead of periodic pushes for boosters will raise public confidence and buy-in and reduce consumer confusion.  

Health leaders, including Bob Wachter, MD, chair of the department of medicine at the University of California, San Francisco, say they like the framing of the concept – that people who are not high-risk should plan each year for a COVID shot and a flu shot.

“Doesn’t mean we KNOW shot will prevent transmission for a year. DOES mean it’ll likely lower odds of SEVERE case for a year & we need strategy to bump uptake,” Dr. Wachter tweeted this week.

But the numbers of Americans seeking boosters remain low. Only one-third of all eligible people 50 years and older have gotten a second COVID booster, according to the Centers for Disease Control and Prevention. About half of those who got the original two shots got a first booster.

Meanwhile, the United States is still averaging about 70,000 new COVID cases and more than 300 deaths every day.

The suggested change in approach comes as Pfizer/BioNTech and Moderna roll out their new boosters targeting Omicron subvariants BA.4 and BA.5, after the CDC recommended their use and the U.S. Food and Drug Administration granted emergency use authorization.

“As the virus continues to change, we will now be able to update our vaccines annually to target the dominant variant,” President Joe Biden said in a statement promoting the yearly approach.

Some say annual shot premature

Other experts say it’s too soon to tell whether an annual approach will work.

“We have no data to support that current vaccines, including the new BA.5 booster, will provide durable protection beyond 4-6 months. It would be good to aspire to this objective, and much longer duration of protection, but that will likely require next generation and nasal vaccines,” said Eric Topol, MD, Medscape’s editor-in-chief and founder and director of the Scripps Research Translational Institute.

A report in Nature Reviews Immunology states, “Mucosal vaccines offer the potential to trigger robust protective immune responses at the predominant sites of pathogen infection” and potentially “can prevent an infection from becoming established in the first place, rather than only curtailing infection and protecting against the development of disease symptoms.”

Dr. Topol tweeted after the White House statements, “[An annual vaccine] has the ring of Covid capitulation.”

William Schaffner, MD, an infectious disease expert at Vanderbilt University, Nashville, Tenn., told this news organization that he cautions against interpreting the White House comments as official policy.

“This is the difficulty of having public health announcements come out of Washington,” he said. “They ought to come out of the CDC.”

He says there is a reasonable analogy between COVID and influenza, but warns, “don’t push the analogy.”

They are both serious respiratory viruses that can cause much illness and death in essentially the same populations, he notes: older, frail people and those who have underlying illnesses or are immunocompromised.

Both viruses also mutate. But there the paths diverge.

“We’ve gotten into a pattern of annually updating the influenza vaccine because it is such a singularly seasonal virus,” Dr. Schaffner said. “Basically it disappears during the summer. We’ve had plenty of COVID during the summers.”

For COVID, he said, “We will need a periodic booster. Could this be annually? That would certainly make it easier.” But it’s too soon to tell, he said.

Dr. Schaffner noted that several manufacturers are working on a combined flu/COVID vaccine.

Just a ‘first step’ toward annual shot

The currently updated COVID vaccine may be the first step toward an annual vaccine, but it’s only the first step, Dr. Schaffner said. “We haven’t committed to further steps yet because we’re watching this virus.”

Syra Madad, DHSc, MSc, an infectious disease epidemiologist at Harvard University’s Belfer Center for Science and International Affairs, Cambridge, Mass., and the New York City hospital system, told this news organization that arguments on both sides make sense.

Having a single message once a year can help eliminate the considerable confusion involving people on individual timelines with different levels of immunity and separate campaigns for COVID and flu shots coming at different times of the year.

“Communication around vaccines is very muddled and that shows in our overall vaccination rates, particularly booster rates,” she says. “The overall strategy is hopeful and makes sense if we’re going to progress that way based on data.”

However, she said that the data are just not there yet to show it’s time for an annual vaccine. First, scientists will need to see how long protection lasts with the Omicron-specific vaccine and how well and how long it protects against severe disease and death as well as infection.

COVID is less predictable than influenza and the influenza vaccine has been around for decades, Dr. Madad noted. With influenza, the patterns are more easily anticipated with their “ladder-like pattern,” she said. “COVID-19 is not like that.”

What is hopeful, she said, “is that we’ve been in the Omicron dynasty since November of 2021. I’m hopeful that we’ll stick with that particular variant.”

Dr. Topol, Dr. Schaffner, and Dr. Madad declared no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Comments from the White House this week suggesting a once-a-year COVID-19 shot for most Americans, “just like your annual flu shot,” were met with backlash from many who say COVID and influenza come from different viruses and need different schedules.

Remarks, from “capitulation” to too few data, hit the airwaves and social media.

Some, however, agree with the White House vision and say that asking people to get one shot in the fall instead of periodic pushes for boosters will raise public confidence and buy-in and reduce consumer confusion.  

Health leaders, including Bob Wachter, MD, chair of the department of medicine at the University of California, San Francisco, say they like the framing of the concept – that people who are not high-risk should plan each year for a COVID shot and a flu shot.

“Doesn’t mean we KNOW shot will prevent transmission for a year. DOES mean it’ll likely lower odds of SEVERE case for a year & we need strategy to bump uptake,” Dr. Wachter tweeted this week.

But the numbers of Americans seeking boosters remain low. Only one-third of all eligible people 50 years and older have gotten a second COVID booster, according to the Centers for Disease Control and Prevention. About half of those who got the original two shots got a first booster.

Meanwhile, the United States is still averaging about 70,000 new COVID cases and more than 300 deaths every day.

The suggested change in approach comes as Pfizer/BioNTech and Moderna roll out their new boosters that target Omicron subvariants BA.4 and BA.5 after the CDC recommended their use and the U.S. Food and Drug Administration approved emergency use authorization. 

“As the virus continues to change, we will now be able to update our vaccines annually to target the dominant variant,” President Joe Biden said in a statement promoting the yearly approach.
 

Some say annual shot premature

Other experts say it’s too soon to tell whether an annual approach will work.

“We have no data to support that current vaccines, including the new BA.5 booster, will provide durable protection beyond 4-6 months. It would be good to aspire to this objective, and much longer duration or protection, but that will likely require next generation and nasal vaccines,” said Eric Topol, MD, Medscape’s editor-in-chief and founder and director of the Scripps Research Translational Institute.

A report in Nature Reviews Immunology states, “Mucosal vaccines offer the potential to trigger robust protective immune responses at the predominant sites of pathogen infection” and potentially “can prevent an infection from becoming established in the first place, rather than only curtailing infection and protecting against the development of disease symptoms.”

Dr. Topol tweeted after the White House statements, “[An annual vaccine] has the ring of Covid capitulation.”

William Schaffner, MD, an infectious disease expert at Vanderbilt University, Nashville, Tenn., told this news organization that he cautions against interpreting the White House comments as official policy.

“This is the difficulty of having public health announcements come out of Washington,” he said. “They ought to come out of the CDC.”

He says there is a reasonable analogy between COVID and influenza, but warns, “don’t push the analogy.”

They are both serious respiratory viruses that can cause much illness and death in essentially the same populations, he notes. These are the older, frail people, people who have underlying illnesses or are immunocompromised.

Both viruses also mutate. But there the paths diverge.

“We’ve gotten into a pattern of annually updating the influenza vaccine because it is such a singularly seasonal virus,” Dr. Schaffner said. “Basically it disappears during the summer. We’ve had plenty of COVID during the summers.”

For COVID, he said, “We will need a periodic booster. Could this be annually? That would certainly make it easier.” But it’s too soon to tell, he said.

Dr. Schaffner noted that several manufacturers are working on a combined flu/COVID vaccine.
 

 

 

Just a ‘first step’ toward annual shot

The currently updated COVID vaccine may be the first step toward an annual vaccine, but it’s only the first step, Dr. Schaffner said. “We haven’t committed to further steps yet because we’re watching this virus.”

Syra Madad, DHSc, MSc, an infectious disease epidemiologist at Harvard University’s Belfer Center for Science and International Affairs, Cambridge, Mass., and the New York City hospital system, told this news organization that arguments on both sides make sense.

Having a single message once a year can help eliminate the considerable confusion involving people on individual timelines with different levels of immunity and separate campaigns for COVID and flu shots coming at different times of the year.

“Communication around vaccines is very muddled and that shows in our overall vaccination rates, particularly booster rates,” she says. “The overall strategy is hopeful and makes sense if we’re going to progress that way based on data.”

However, she said that the data are just not there yet to show it’s time for an annual vaccine. First, scientists will need to see how long protection lasts with the Omicron-specific vaccine and how well and how long it protects against severe disease and death as well as infection.

COVID is less predictable than influenza and the influenza vaccine has been around for decades, Dr. Madad noted. With influenza, the patterns are more easily anticipated with their “ladder-like pattern,” she said. “COVID-19 is not like that.”

What is hopeful, she said, “is that we’ve been in the Omicron dynasty since November of 2021. I’m hopeful that we’ll stick with that particular variant.”

Dr. Topol, Dr. Schaffner, and Dr. Madad declared no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Comments from the White House this week suggesting a once-a-year COVID-19 shot for most Americans, “just like your annual flu shot,” were met with backlash from many who say COVID and influenza come from different viruses and need different schedules.

Remarks, from “capitulation” to too few data, hit the airwaves and social media.

Some, however, agree with the White House vision and say that asking people to get one shot in the fall instead of periodic pushes for boosters will raise public confidence and buy-in and reduce consumer confusion.  

Health leaders, including Bob Wachter, MD, chair of the department of medicine at the University of California, San Francisco, say they like the framing of the concept – that people who are not high-risk should plan each year for a COVID shot and a flu shot.

“Doesn’t mean we KNOW shot will prevent transmission for a year. DOES mean it’ll likely lower odds of SEVERE case for a year & we need strategy to bump uptake,” Dr. Wachter tweeted this week.

But the numbers of Americans seeking boosters remain low. Only one-third of all eligible people 50 years and older have gotten a second COVID booster, according to the Centers for Disease Control and Prevention. About half of those who got the original two shots got a first booster.

Meanwhile, the United States is still averaging about 70,000 new COVID cases and more than 300 deaths every day.

The suggested change in approach comes as Pfizer/BioNTech and Moderna roll out their new boosters that target Omicron subvariants BA.4 and BA.5 after the CDC recommended their use and the U.S. Food and Drug Administration approved emergency use authorization. 

“As the virus continues to change, we will now be able to update our vaccines annually to target the dominant variant,” President Joe Biden said in a statement promoting the yearly approach.

Some say annual shot premature

Other experts say it’s too soon to tell whether an annual approach will work.

“We have no data to support that current vaccines, including the new BA.5 booster, will provide durable protection beyond 4-6 months. It would be good to aspire to this objective, and much longer duration of protection, but that will likely require next generation and nasal vaccines,” said Eric Topol, MD, Medscape’s editor-in-chief and founder and director of the Scripps Research Translational Institute.

A report in Nature Reviews Immunology states, “Mucosal vaccines offer the potential to trigger robust protective immune responses at the predominant sites of pathogen infection” and potentially “can prevent an infection from becoming established in the first place, rather than only curtailing infection and protecting against the development of disease symptoms.”

Dr. Topol tweeted after the White House statements, “[An annual vaccine] has the ring of Covid capitulation.”

William Schaffner, MD, an infectious disease expert at Vanderbilt University, Nashville, Tenn., told this news organization that he cautions against interpreting the White House comments as official policy.

“This is the difficulty of having public health announcements come out of Washington,” he said. “They ought to come out of the CDC.”

He says there is a reasonable analogy between COVID and influenza, but warns, “don’t push the analogy.”

Both are serious respiratory viruses that can cause much illness and death in essentially the same populations, he notes: older, frail people and those who have underlying illnesses or are immunocompromised.

Both viruses also mutate. But there the paths diverge.

“We’ve gotten into a pattern of annually updating the influenza vaccine because it is such a singularly seasonal virus,” Dr. Schaffner said. “Basically it disappears during the summer. We’ve had plenty of COVID during the summers.”

For COVID, he said, “We will need a periodic booster. Could this be annually? That would certainly make it easier.” But it’s too soon to tell, he said.

Dr. Schaffner noted that several manufacturers are working on a combined flu/COVID vaccine.

Just a ‘first step’ toward annual shot

The currently updated COVID vaccine may be the first step toward an annual vaccine, but it’s only the first step, Dr. Schaffner said. “We haven’t committed to further steps yet because we’re watching this virus.”

Syra Madad, DHSc, MSc, an infectious disease epidemiologist at Harvard University’s Belfer Center for Science and International Affairs, Cambridge, Mass., and the New York City hospital system, told this news organization that arguments on both sides make sense.

Having a single message once a year can help eliminate the considerable confusion involving people on individual timelines with different levels of immunity and separate campaigns for COVID and flu shots coming at different times of the year.

“Communication around vaccines is very muddled, and that shows in our overall vaccination rates, particularly booster rates,” she said. “The overall strategy is hopeful and makes sense if we’re going to progress that way based on data.”

However, she said that the data are just not there yet to show it’s time for an annual vaccine. First, scientists will need to see how long protection lasts with the Omicron-specific vaccine and how well and how long it protects against severe disease and death as well as infection.

COVID is less predictable than influenza, and the influenza vaccine has been around for decades, Dr. Madad noted. Influenza follows a more easily anticipated, “ladder-like pattern,” she said. “COVID-19 is not like that.”

What is hopeful, she said, “is that we’ve been in the Omicron dynasty since November of 2021. I’m hopeful that we’ll stick with that particular variant.”

Dr. Topol, Dr. Schaffner, and Dr. Madad declared no relevant financial relationships.

A version of this article first appeared on Medscape.com.
