The best statins to lower non-HDL cholesterol in diabetes?

A network meta-analysis of 42 clinical trials concludes that rosuvastatin, simvastatin, and atorvastatin are the statins most effective at lowering non-high-density-lipoprotein cholesterol (non-HDL-C) in people with diabetes who are at risk for cardiovascular disease.

The analysis focused on the efficacy of statin treatment in reducing non-HDL-C, as opposed to reducing low-density-lipoprotein cholesterol (LDL-C), which has traditionally been used as a surrogate for cardiovascular disease risk from hypercholesterolemia.

“The National Cholesterol Education Program in the United States recommends that LDL-C values should be used to estimate the risk of cardiovascular disease related to lipoproteins,” lead author Alexander Hodkinson, MD, senior National Institute for Health Research fellow, University of Manchester, England, told this news organization.

“But we believe that non-high-density-lipoprotein cholesterol is more strongly associated with the risk of cardiovascular disease, because non-HDL-C combines all the bad types of cholesterol, which LDL-C misses, so it could be a better tool than LDL-C for assessing CVD risk and effects of treatment. We already knew which of the statins reduce LDL-C, but we wanted to know which ones reduced non-HDL-C; hence the reason for our study,” Dr. Hodkinson said.

The findings were published online in BMJ.

In April 2021, the National Institute for Health and Care Excellence (NICE) in the United Kingdom updated guidelines for adults with diabetes to recommend that non-HDL-C should replace LDL-C as the primary target for reducing the risk for cardiovascular disease with lipid-lowering treatment.

Currently, NICE is alone in its recommendation. Other international guidelines do not have a non-HDL-C target and use LDL-C reduction instead. These include guidelines from the European Society of Cardiology (ESC), the American College of Cardiology (ACC), the American Heart Association (AHA), and the National Lipid Association.

Non-HDL-C is simple to calculate: clinicians subtract the HDL-C value from the total cholesterol level, he added.
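
As a minimal sketch of that arithmetic (the function name and example values below are illustrative, not from the study):

```python
def non_hdl_c(total_cholesterol: float, hdl_c: float) -> float:
    """Non-HDL cholesterol: total cholesterol minus HDL cholesterol.

    Works in mg/dL or mmol/L, provided both inputs use the same unit.
    """
    return total_cholesterol - hdl_c

# Illustrative values: total cholesterol 5.2 mmol/L, HDL-C 1.1 mmol/L
print(round(non_hdl_c(5.2, 1.1), 1))  # 4.1 mmol/L
```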

This analysis compared the effectiveness of different statins at different intensities in reducing levels of non-HDL-C in 42 randomized controlled trials that included 20,193 adults with diabetes.

Compared with placebo, rosuvastatin at moderate- and high-intensity doses and simvastatin and atorvastatin at high-intensity doses lowered non-HDL-C the most over an average treatment period of 12 weeks.

High-intensity rosuvastatin led to a 2.31 mmol/L reduction in non-HDL-C (95% credible interval, –3.39 to –1.21). Moderate-intensity rosuvastatin led to a 2.27 mmol/L reduction in non-HDL-C (95% credible interval, –3.00 to –1.49).

High-intensity simvastatin led to a 2.26 mmol/L reduction in non-HDL-C (95% credible interval, –2.99 to –1.51).

High-intensity atorvastatin led to a 2.20 mmol/L reduction in non-HDL-C (95% credible interval, –2.69 to –1.70).

Atorvastatin and simvastatin at any intensity and pravastatin at low intensity were also effective in reducing levels of non-HDL-C, the researchers noted.

In 4,670 patients at high risk for a major cardiovascular event, atorvastatin at high intensity showed the largest reduction in levels of non-HDL-C (1.98 mmol/L; 95% credible interval, –4.16 to 0.26).

In addition, high-intensity simvastatin and rosuvastatin were the most effective in reducing LDL-C.

High-intensity simvastatin led to a 1.93 mmol/L reduction in LDL-C (95% credible interval, –2.63 to –1.21), and high-intensity rosuvastatin led to a 1.76 mmol/L reduction in LDL-C (95% credible interval, –2.37 to –1.15).

In four studies, atorvastatin at moderate intensity showed significant reductions in nonfatal myocardial infarction, compared with placebo (relative risk, 0.57; 95% confidence interval, 0.43-0.76). No significant differences were seen for discontinuations, nonfatal stroke, or cardiovascular death.
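
For readers less used to the measure, relative risk is simply the ratio of event rates in the two arms; a quick sketch with invented counts (not the trials' data):

```python
# Relative risk: event rate with treatment divided by event rate with
# placebo. The counts below are made up for illustration; an RR of 0.57
# corresponds to 43% fewer nonfatal MIs in the treated group.
def relative_risk(events_tx: int, n_tx: int, events_ctl: int, n_ctl: int) -> float:
    return (events_tx / n_tx) / (events_ctl / n_ctl)

print(round(relative_risk(57, 1000, 100, 1000), 2))  # 0.57
```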

“We hope our findings will help guide clinicians on statin selection itself, and what types of doses they should be giving patients. These results support using NICE’s new policy guidelines on cholesterol monitoring, using this non-HDL-C measure, which contains all the bad types of cholesterol for patients with diabetes,” Dr. Hodkinson said.

“This study further emphasizes what we have known about the benefit of statin therapy in patients with type 2 diabetes,” Prakash Deedwania, MD, professor of medicine, University of California, San Francisco, told this news organization.

Dr. Deedwania and others have published data on patients with diabetes that showed that treatment with high-intensity atorvastatin was associated with significant reductions in major adverse cardiovascular events.

“Here they use non-HDL cholesterol as a target. The NICE guidelines are the only guidelines looking at non-HDL cholesterol; however, all guidelines suggest an LDL to be less than 70 in all people with diabetes, and for those with recent acute coronary syndromes, the latest evidence suggests the LDL should actually be less than 50,” said Dr. Deedwania, spokesperson for the AHA and ACC.

As far as which measure to use, he believes both are useful. “It’s six of one and half a dozen of the other, in my opinion. The societies have not recommended non-HDL cholesterol and it’s easier to stay with what is readily available for clinicians, and using LDL cholesterol is still okay. The results of this analysis are confirmatory, in that looking at non-HDL cholesterol gives results very similar to what these statins have shown for their effect on LDL cholesterol,” he said.

Non-HDL cholesterol a better marker?

For Robert Rosenson, MD, director of metabolism and lipids at Mount Sinai Health System and professor of medicine and cardiology at the Icahn School of Medicine at Mount Sinai, New York, non-HDL cholesterol is becoming an important marker of risk for several reasons.

“The focus on LDL cholesterol has been due to the causal relationship of LDL with atherosclerotic cardiovascular disease, but in the last few decades, non-HDL has emerged because more people are overweight, have insulin resistance, and have diabetes,” Dr. Rosenson told this news organization. “In those situations, the LDL cholesterol underrepresents the risk of the LDL particles. With insulin resistance, the particles become more triglycerides and less cholesterol, so on a per-particle basis, you need to get more LDL particles to get to a certain LDL cholesterol concentration.”

Non-HDL cholesterol testing does not require fasting, another advantage of using it to monitor cholesterol, he added.

What is often forgotten is that moderate- to high-intensity statins have very good triglyceride-lowering effects, Dr. Rosenson said.

“This article highlights that, by using higher doses, you get more triglyceride-lowering. Hopefully, this will get practitioners to recognize that non-HDL cholesterol is a better predictor of risk in people with diabetes,” he said.

The study was funded by the National Institute for Health Research. Dr. Hodkinson, Dr. Rosenson, and Dr. Deedwania report no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Melanoma screening study stokes overdiagnosis debate

Screening for melanoma at the primary care level is associated with significant increases in the detection of in situ and invasive thin melanomas but not thicker, more worrisome disease, new research shows.

Without a corresponding decrease in melanoma mortality, an increase in the detection of those thin melanomas “raises the concern that early detection efforts, such as visual skin screening, may result in overdiagnosis,” the study authors wrote. “The value of a cancer screening program should most rigorously be measured not by the number of new, early cancers detected, but by its impact on the development of late-stage disease and its associated morbidity, cost, and mortality.”

The research, published in JAMA Dermatology, has reignited the controversy over the benefits and harms of primary care skin cancer screening, garnering two editorials that reflect different sides of the debate.

In one, Robert A. Swerlick, MD, pointed out that, “despite public messaging to the contrary, to my knowledge there is no evidence that routine skin examinations have any effect on melanoma mortality.

“The stage shift to smaller tumors should not be viewed as success and is very strong evidence of overdiagnosis,” wrote Dr. Swerlick, of the department of dermatology, Emory University, Atlanta.

The other editorial, however, argued that routine screening saves lives. “Most melanoma deaths are because of stage I disease, with an estimated 3%-15% of thin melanomas (≤ 1 mm) being lethal,” wrote a trio of editorialists from Oregon Health & Science University, Portland.

When considering the high mutation rate associated with melanoma and the current limits of treatment options, early diagnosis becomes “particularly important and counterbalances the risk of overdiagnosis,” the editorialists asserted.

Primary care screening study

The new findings come from an observational study of a quality improvement initiative conducted in the University of Pittsburgh Medical Center system between 2014 and 2018. Primary care clinicians were offered training in melanoma identification through skin examination and were encouraged to offer annual skin cancer screening to patients aged 35 years and older.

Of 595,799 eligible patients, 144,851 (24.3%) were screened at least once during the study period. Those who received screening were more likely than unscreened patients to be older (median age, 59 vs. 55 years), women, and non-Hispanic White persons.

During a follow-up of 5 years, the researchers found that patients who received screening were significantly more likely than unscreened patients to be diagnosed with in situ melanoma (incidence, 30.4 vs. 14.4; hazard ratio, 2.6; P < .001) or thin invasive melanoma (incidence, 24.5 vs. 16.1; HR, 1.8; P < .001), after adjusting for factors that included age, sex, and race.

The screened patients were also more likely than unscreened patients to be diagnosed with in situ interval melanomas, defined as melanomas occurring at least 60 days after initial screening (incidence, 26.7 vs. 12.9; HR, 2.1; P < .001), as well as thin invasive interval melanomas (incidence, 18.5 vs. 14.4; HR, 1.3; P = .03).

The 60-day interval was included to account for the possible time to referral to a specialist for definitive diagnosis, the authors explained.

The incidence of melanomas thicker than 4 mm was lower in screened than in unscreened patients, but the difference was not statistically significant for all melanomas (2.7 vs. 3.3; HR, 0.8; P = .38) or for interval melanomas (1.5 vs. 2.7; HR, 0.6; P = .15).
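
Adjusted hazard ratios like those reported above are typically estimated with a proportional hazards model; the sketch below shows the general form of such an analysis, using hypothetical column and file names rather than the study's own code:

```python
# Sketch of an adjusted time-to-event analysis of melanoma diagnosis,
# assuming a Cox proportional hazards model (lifelines). Column names
# are hypothetical; the study adjusted for factors that included age,
# sex, and race.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("screening_cohort.csv")  # hypothetical data file

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="melanoma_dx",
        formula="screened + age + sex + race")
cph.print_summary()  # exp(coef) for 'screened' is the adjusted hazard ratio
```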

Experts weigh in

Although the follow-up period was 5 years, not all patients were followed that long after undergoing screening. For instance, for some patients, follow-up occurred only 1 year after they had been screened.

The study’s senior author, Laura K. Ferris, MD, PhD, of the department of dermatology, University of Pittsburgh, noted that a longer follow-up could shift the results.

“When you look at the curves in our figures, you do start to see them separate more and more over time for the thicker melanomas,” Dr. Ferris said in an interview. “I do suspect that, if we followed patients longer, we might start to see a more significant difference.”

The findings nevertheless add to evidence that although routine screening substantially increases the detection of melanomas overall, these melanomas are often not the ones doctors are most worried about or that increase a person’s risk of mortality, Dr. Ferris noted.

When it comes to melanoma screening, balancing the risks and benefits is key. One major downside, Dr. Ferris said, is the burden such screening could place on the health care system, with potentially unproductive screenings causing delays in care for patients with more urgent needs.

“We are undersupplied in the dermatology workforce, and there is often a long wait to see dermatologists, so we really want to make sure, as trained professionals, that patients have access to us,” she said. “If we’re doing something that doesn’t have proven benefit and is increasing the wait time, that will come at the expense of other patients’ access.”

Costs involved in skin biopsies and excisions of borderline lesions as well as the potential to increase patients’ anxiety represent other important considerations, Dr. Ferris noted.

However, Sancy A. Leachman, MD, PhD, a coauthor of the editorial in favor of screening, said in an interview that “at the individual level, there are an almost infinite number of individual circumstances that could lead a person to decide that the potential benefits outweigh the harms.”

According to Dr. Leachman, who is chair of the department of dermatology, Oregon Health & Science University, these individual priorities may not align with those of the various decision-makers or with guidelines, such as those from the U.S. Preventive Services Task Force, which gives visual skin cancer screening of asymptomatic patients an “I” rating, indicating “insufficient evidence.”

“Many federal agencies and payer groups focus on minimizing costs and optimizing outcomes,” Dr. Leachman and coauthors wrote. As the only professional advocates for individual patients, physicians “have a responsibility to assure that the best interests of patients are served.”

The study was funded by the University of Pittsburgh Melanoma and Skin Cancer Program. Dr. Ferris and Dr. Swerlick disclosed no relevant financial relationships. Dr. Leachman is the principal investigator for War on Melanoma, an early-detection program in Oregon.

A version of this article first appeared on Medscape.com.

Long-term cannabis use linked to dementia risk factors

Long-term cannabis use is linked to hippocampal atrophy and poorer cognitive function in midlife – known risk factors for dementia.

A large prospective, longitudinal study showed long-term cannabis users had an intelligence quotient (IQ) decline from age 18 to midlife (mean, 5.5 IQ points), poorer learning and processing speed compared with childhood, and self-reported memory and attention problems. Long-term cannabis users also showed hippocampal atrophy at midlife (age 45), which, combined with mild midlife cognitive deficits, constitutes known risk factors for dementia.

“Long-term cannabis users – people who have used cannabis from 18 or 19 years old and continued using through midlife – showed cognitive deficits, compared with nonusers. They also showed more severe cognitive deficits, compared with long-term alcohol users and long-term tobacco users. But people who used infrequently or recreationally in midlife did not show as severe cognitive deficits. Cognitive deficits were confined to cannabis users,” lead investigator Madeline Meier, PhD, associate professor of psychology, Arizona State University, Tempe, said in an interview.

“Long-term cannabis users had smaller hippocampal volume, but we also found that smaller hippocampal volume did not explain the cognitive deficits among the long-term cannabis users,” she added.

The study was recently published online in the American Journal of Psychiatry.

Growing use in Boomers

Long-term cannabis use has been associated with memory problems. Studies examining the impact of cannabis use on the brain have shown conflicting results. Some suggest that regular use in adolescence is associated with altered connectivity and reduced volume of brain regions involved in executive functions such as memory, learning, and impulse control, compared with the brains of nonusers.

Others found no significant structural differences between the brains of cannabis users and nonusers.

An earlier, large longitudinal study in New Zealand found that persistent cannabis use (with frequent use starting in adolescence) was associated with a loss of an average of six (or up to eight) IQ points measured in mid-adulthood.

Cannabis use is increasing among Baby Boomers – a group born between 1946 and 1964 – who used cannabis at historically high rates as young adults, and who now use it at historically high rates in midlife and as older adults.

To date, case-control studies, which are predominantly in adolescents and young adults, have found that cannabis users show subtle cognitive deficits and structural brain differences, but it is unclear whether these differences in young cannabis users might be larger in midlife and in older adults who have longer histories of use.

The study included a representative cohort of 1,037 individuals in Dunedin, New Zealand, born between April 1972 and March 1973, and followed from age 3 to 45.

Cannabis use and dependence were assessed at ages 18, 21, 26, 32, 38, and 45. IQ was assessed at ages 7, 9, 11, and 45. Specific neuropsychological functions and hippocampal volume were assessed at age 45. 

“Most of the previous research has focused on adolescent and young-adult cannabis users. What we’re looking at here is long-term cannabis users in midlife, and we’re finding that long-term users show cognitive deficits. But we’re not just looking at a snapshot of people in midlife, we’re also doing a longitudinal comparison – comparing them to themselves in childhood. We saw that long-term cannabis users showed a decline in IQ from childhood to adulthood,” said Dr. Meier. 

Participants in the study are members of the Dunedin Longitudinal Study, a representative birth cohort (n = 1,037; 91% of eligible births; 52% male) born between April 1972 and March 1973 in Dunedin, New Zealand, with a first assessment at age 3.

The cohort was representative of the general population on socioeconomic status (SES), key health indicators, and demographics. Assessments were carried out at birth and at ages 3, 5, 7, 9, 11, 13, 15, 18, 21, 26, 32, 38, and 45.

Shrinking hippocampal volume

Cannabis use, cognitive function, and hippocampal volume were assessed by comparing long-term cannabis users (n = 84) against five distinct groups:

  • Lifelong cannabis nonusers (n = 196) – to replicate the control group most often reported in the case-control literature
  • Midlife recreational cannabis users (n = 65) – to determine if cognitive deficits and structural brain differences are apparent in nonproblem users – the majority of cannabis users
  • Long-term tobacco users (n = 75)
  • Long-term alcohol users (n = 57) – benchmark comparisons for any cannabis findings and to disentangle potential cannabis effects from tobacco and alcohol effects
  • Cannabis quitters (n = 58) – to determine whether differences are apparent after cessation

The investigators tested dose-response associations using continuously measured persistence of cannabis use, rigorously adjusting for numerous confounders derived from multiple longitudinal waves and data sources.

The investigators also tested whether associations between continuously measured persistence of cannabis use and cognitive deficits were mediated by hippocampal volume differences.
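
In outline, a dose-response test of this kind is an adjusted regression of each midlife outcome on persistence of use. The sketch below uses hypothetical variable and file names, not the authors' code; the outcomes and confounders mirror those named in the article (IQ, learning, processing speed; childhood IQ, other substance use, socioeconomic background):

```python
# Sketch of a confounder-adjusted dose-response test: regress a midlife
# cognitive outcome on continuously measured persistence of cannabis
# use. All variable and file names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("dunedin_waves.csv")  # hypothetical data file

model = smf.ols(
    "adult_iq ~ cannabis_persistence + childhood_iq + childhood_ses"
    " + tobacco_persistence + alcohol_persistence",
    data=df,
).fit()
# The coefficient on cannabis_persistence estimates the dose-response
# association after adjustment.
print(model.summary())
```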

The hippocampus was the area of focus because it has a high density of cannabinoid receptors, because it is instrumental for learning and memory – one of the most consistently impaired cognitive domains in cannabis users – and because it is the brain region that most consistently emerges as smaller in cannabis users relative to controls. Structural MRI was done at age 45 for 875 participants (93% of age 45 participants).

Of 997 cohort members still alive at age 45, 938 (94.1%) were assessed at age 45. Age 45 participants did not differ significantly from other participants on childhood SES, childhood self-control, or childhood IQ. Cognitive functioning among midlife recreational cannabis users was similar to representative cohort norms, suggesting that infrequent recreational cannabis use in midlife is unlikely to compromise cognitive functioning.

However, long-term cannabis users did not perform significantly worse on any test than cannabis quitters. Cannabis quitters showed subtle cognitive deficits that may explain inconsistent findings on the benefits of cessation.

Smaller hippocampal volume is thought to be a possible mediator of cannabis-related cognitive deficits because the hippocampus is rich in CB1 receptors and is involved in learning and memory.

Compared with nonusers, long-term cannabis users had smaller bilateral volume in the total hippocampus and in 5 of 12 structurally and functionally distinct subregions (tail, hippocampal amygdala transition area, CA1, molecular layer, and dentate gyrus), consistent with case-control studies. They also had significantly smaller volumes than midlife recreational cannabis users in the left and right hippocampus and in 3 of 12 subfields (tail, CA1, and molecular layer).

More potent

“If you’ve been using cannabis very long term and now are in midlife, you might want to consider quitting. Quitting is associated with slightly better cognitive performance in midlife. We also need to watch for risk of dementia. We know that people who show cognitive deficits at midlife are at elevated risk for later life dementia. And the deficits we saw among long-term cannabis users (although fairly mild), they were in the range in terms of effect size of what we see among people in other studies who have gone on to develop dementia in later life,” said Dr. Meier.

The study findings conflict with those of other studies, including one by the same research group, which compared the cognitive functioning of twins who were discordant for cannabis use and found little evidence of cannabis-related cognitive deficits. Because long-term cannabis users also use tobacco, alcohol, and other illicit drugs, disentangling cannabis effects from other substances is challenging.

“Long-term cannabis users tend to be long-term polysubstance users, so it’s hard to isolate,” said Dr. Meier.

Additionally, some group sizes were small, raising concerns about low statistical power.

“Group sizes were small but we didn’t rely only on those group comparisons; however, we did find statistical differences. We also tested highly statistically powered dose-response associations between persistence of cannabis use over ages 18-45 and each of our outcomes (IQ, learning, and processing speed in midlife) while adjusting possible alternate explanations such as low childhood IQ, other substance use, [and] socioeconomic backgrounds.

“These dose-response associations used large sample sizes, were highly powered, and took into account a number of alternative explanations. These two different approaches showed very similar findings and one bolstered the other,” said Dr. Meier.

The study’s results were based on individuals who began using cannabis in the 1980s or ‘90s, but the concentration of tetrahydrocannabinol (THC) has risen in recent years.

“When the study began, THC concentration was approximately 4%. Over the last decade we have seen it go up to 12% or even higher. A recent study surveying U.S. dispensaries found 20% THC. If THC accounts for impairment, then the effects can be larger [with higher concentrations]. One of the challenges in the U.S. is that there are laws prohibiting researchers from testing cannabis, so we have to rely on product labels, which we know are unreliable,” said Dr. Meier.

A separate report is forthcoming with results of exploratory analyses of associations between long-term cannabis use and comprehensive MRI measures of global and regional gray and white matter.

The data will also be used to answer a number of different questions about cognitive deficits, brain structure, aging preparedness, social preparedness (strength of social networks), financial and health preparedness, and biological aging (the pace of aging relative to chronological age) in long-term cannabis users, Dr. Meier noted.

‘Fantastic’ research

Commenting on the research for this news organization, Andrew J. Saxon, MD, professor, department of psychiatry & behavioral sciences at University of Washington, Seattle, and a member of the American Psychiatric Association’s Council on Addiction Psychiatry, said the study “provides more evidence that heavy and regular cannabis use is not benign behavior.”

“It’s a fantastic piece of research in which they enrolled participants at birth and have followed them up to age 45. In most of the other research that has been done, we have no idea what their baseline was. What’s so remarkable here is that they can clearly demonstrate the loss of IQ points from childhood to age 45,” said Dr. Saxon.

“It is clear that, in people using cannabis long term, cognition is impaired. It would be good to have a better handle on how much cognitive function can be regained if you quit, because that could be a motivator for quitting in people where cannabis is having an adverse effect on their lives,” he added.

On the issue of THC potency, Dr. Saxon said that, while it’s true the potency of cannabis is increasing in terms of THC concentrations, the question is: “Do people who use cannabis use a set amount or do they imbibe until they achieve the state of altered consciousness that they’re seeking? Although there has been some research in the area of self-regulation and cannabis potency, we do not yet have the answers to determine if there is any causation,” said Dr. Saxon.

Dr. Meier and Dr. Saxon reported no relevant financial conflicts of interest.

A version of this article first appeared on Medscape.com.


U.S. life expectancy dropped by 2 years in 2020: Study


The average life expectancy in the United States is expected to drop by 2.26 years from 2019 to 2021, the sharpest decrease during that time among high-income nations, according to a new study.

The study, published in medRxiv, said U.S. life expectancy went from 78.86 years in 2019 to 76.99 years in 2020, during the thick of the global COVID-19 pandemic. Though vaccines were widely available in 2021, the U.S. life expectancy was expected to keep going down, to 76.60 years.

In “peer countries” – Austria, Belgium, Denmark, England and Wales, Finland, France, Germany, Israel, Italy, the Netherlands, New Zealand, Northern Ireland, Norway, Portugal, Scotland, South Korea, Spain, Sweden, and Switzerland – life expectancy went down only 0.57 years from 2019 to 2020 and increased by 0.28 years in 2021, the study said. The peer countries now have a life expectancy that’s 5 years longer than in the United States.
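
As a quick arithmetic check, the year-over-year changes implied by these figures can be recomputed directly. This is a minimal sketch in Python using only the numbers quoted above; note that the 2021 U.S. value is the study’s modeled estimate, not an observed figure.

```python
# U.S. life expectancy figures reported above (years)
us_life_expectancy = {2019: 78.86, 2020: 76.99, 2021: 76.60}

drop_2020 = us_life_expectancy[2019] - us_life_expectancy[2020]   # 1.87 years
drop_total = us_life_expectancy[2019] - us_life_expectancy[2021]  # 2.26 years

# Peer-country changes reported above (years): -0.57 in 2020, +0.28 in 2021
peer_net_change = -0.57 + 0.28  # -0.29 years net over 2019-2021

print(f"U.S. drop, 2019-2020: {drop_2020:.2f} years")
print(f"U.S. drop, 2019-2021: {drop_total:.2f} years (the 2.26-year headline figure)")
print(f"Peer-country net change, 2019-2021: {peer_net_change:.2f} years")
```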

“The fact the U.S. lost so many more lives than other high-income countries speaks not only to how we managed the pandemic, but also to more deeply rooted problems that predated the pandemic,” said Steven H. Woolf, MD, one of the study authors and a professor of family medicine and population health at Virginia Commonwealth University, Richmond, according to Reuters.

“U.S. life expectancy has been falling behind other countries since the 1980s, and the gap has widened over time, especially in the last decade.”

Lack of universal health care, income and educational inequality, and less-healthy physical and social environments helped lead to the decline in American life expectancy, according to Dr. Woolf.

The life expectancy drop from 2019 to 2020 hit Black and Hispanic people hardest, according to the study. But the drop from 2020 to 2021 affected White people the most, with average life expectancy among them going down about a third of a year.

Researchers looked at death data from the National Center for Health Statistics, the Human Mortality Database, and overseas statistical agencies. Life expectancy for 2021 was estimated “using a previously validated modeling method,” the study said.

A version of this article first appeared on WebMD.com.


Weight gain may exacerbate structural damage in knee OA


An increase in body weight appears to have a detrimental effect on some radiographic features of knee, but not hip, osteoarthritis, researchers reported at the OARSI 2022 World Congress.

Using data from the Osteoarthritis Initiative (OAI), researchers from the University of California found that a greater than 5% increase in body weight over 4 years was associated with a 29% increased risk for medial joint space narrowing (JSN), compared with controls (P = .038). There was also a 34% increased risk for developing frequent knee pain (P = .009).

Conversely, weight loss appeared to offer some protection from structural damage in knee OA, Gabby B. Joseph, PhD, a specialist in radiology and biomedical imaging, said at the congress, sponsored by the Osteoarthritis Research Society International.

Indeed, individuals who had achieved a weight loss of more than 5% at 4-year follow-up were less likely to have a worsened Kellgren and Lawrence (KL) grade than those whose body weight remained the same (odds ratio, 0.69; P = .009).

Weight loss was also associated with a higher chance of experiencing resolution of knee pain over 12 months, with an OR of 1.40 (P = .019).
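
The risk percentages and odds ratios quoted above are two expressions of the same effect sizes. The minimal sketch below shows the conversion; treating the 29% and 34% risk increases as odds ratios of 1.29 and 1.34 is an assumption for illustration, since the article does not name the measure for those two figures.

```python
def odds_pct_change(odds_ratio: float) -> float:
    """Express an odds ratio as a percent change in odds vs. the comparator group."""
    return (odds_ratio - 1.0) * 100.0

# Effect sizes reported above
effects = [
    ("medial JSN, >5% weight gain vs. controls", 1.29),
    ("frequent knee pain, >5% weight gain vs. controls", 1.34),
    ("worsened KL grade, >5% weight loss vs. no change", 0.69),
    ("knee-pain resolution, >5% weight loss vs. no change", 1.40),
]
for outcome, or_value in effects:
    print(f"{outcome}: OR {or_value} -> {odds_pct_change(or_value):+.0f}% change in odds")
```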

Importance of weight change in OA

“We know that weight loss has beneficial effects on knee OA symptoms, such as pain relief and improvement in physical function,” commented Xingzhong Jin, PhD, an NHMRC Early Career Fellow at the Centre for Big Data Research in Health at the University of New South Wales, Sydney.

“But what is unclear is whether weight loss could slow down structural degradation in the joint in the long run,” he said in an interview. “These findings mean that weight control is clearly very important for knee OA, in terms of improving symptoms as well as preventing structural progression.”

He added: “The evidence on hip OA is less clear. As most of the knowledge in this space was generated from people with knee OA, this work is an important contribution to knowledge around the care of people with hip OA.”
 

Why look at weight change effects in OA?

“Obesity is a modifiable risk factor for osteoarthritis,” Dr. Joseph said at the start of her virtual presentation. Indeed, patients with obesity are more than twice as likely to develop knee OA than their normal weight counterparts.

Although there have been various studies looking at weight loss and weight gain in OA, most have focused on weight loss rather than gain, and OA in the knee rather than the hip, she explained.

The aim of the present study, therefore, was to take a closer look at the possible effect of both weight gain and weight loss in people with hip or knee OA in terms of radiographic outcomes (KL grade change, medial JSN), symptomatic outcomes (knee pain and resolution at 12 months), and the need for joint replacement.

“The clinical implications are to develop targeted long-term strategies for site-specific informed recommendations to prevent joint degeneration,” Dr. Joseph said.

Using data on nearly 3,000 individuals from the OAI, Dr. Joseph and collaborators classified people with OA into one of three groups based on weight change over a 4-year period: those with at least a 5% gain in weight (n = 714), those with no change (–3% to 3%) in weight (n = 1,553), and those with at least a 5% loss in weight.
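
Written out as a rule, that grouping looks like the minimal sketch below. The function name and signature are illustrative, and the handling of weight changes between 3% and 5% in magnitude (which fall into none of the three named bands) is an assumption, since the article does not say how such participants were treated.

```python
from typing import Optional

def classify_weight_change(baseline_kg: float, year4_kg: float) -> Optional[str]:
    """Assign a participant to one of the three OAI weight-change groups named above."""
    pct = 100.0 * (year4_kg - baseline_kg) / baseline_kg
    if pct >= 5.0:
        return "weight gain"   # at least a 5% gain over the 4-year period
    if -3.0 <= pct <= 3.0:
        return "no change"     # within -3% to 3%
    if pct <= -5.0:
        return "weight loss"   # at least a 5% loss over the 4-year period
    return None  # 3%-5% bands are unaccounted for in the article (assumed excluded)
```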

The results, which were published in Arthritis Care & Research, also revealed no differences in the rate of total hip or knee arthroplasties between the groups, and no differences between the weight gain and weight loss groups and controls in terms of hip radiographic or symptomatic changes.

“Why are there differing effects of weight change in the knee versus the hip? This could be multifactorial, but there could be a few things going on,” said Dr. Joseph. “First, the joint structure is clearly different between the knee and the hip. The knee is a hinge joint; the hip is a ball-and-socket joint. Malalignment could affect these in different ways.”

Additionally, “the knee also has thicker cartilage [and] the hip has thinner cartilage, and the loading patterns may be different in these joints.”

There were also differences in the rate of progression between the knee and the hip. “This was especially noticeable for the radiographic progression,” Dr. Joseph said, with rates being higher in the knee.

Noting that the study is limited by its retrospective design, Dr. Joseph concluded: “We don’t know why these people lost or gained weight. So, this would be something that would be more apparent in a prospective study.

“Also, there were no MRI outcomes, as MRI imaging was not available in the hip in the OAI, but clearly morphology T1 and T2 would be useful to assess as outcomes here as well.”

The OAI is a public-private partnership funded by the National Institutes of Health, with initial support from Merck, Novartis, GlaxoSmithKline, and Pfizer. Dr. Joseph and Dr. Jin reported having no conflicts of interest to disclose.
 


Overuse of surveillance in bladder cancer, despite guidelines


Clinicians are not following guidelines that recommend a de-escalation in surveillance for patients with low-risk non–muscle-invasive bladder cancer (NMIBC), a new study concludes.  

These cancers are associated with low rates of recurrence, progression, and bladder cancer–specific death, so current clinical practice guidelines recommend against frequent monitoring and testing.

However, the study authors found that patients with a low-grade Ta NMIBC diagnosis underwent a median of three cystoscopies per year, and many also received a median of two imaging scans (CT or MRI) as well as two to three urine-based tests.

“These data suggest a need for ongoing efforts to limit overuse of treatment and surveillance, which may in turn mitigate associated increases in the costs of care,” write the authors, led by Kelly K. Bree, MD, from the department of urology, University of Texas MD Anderson Cancer Center, Houston. Bladder cancer has the highest lifetime treatment cost of all malignancies, they point out.

The study was published online in JAMA Network Open.
 

Higher value and more evidence-based

The impact of increased surveillance of this patient cohort has broad implications for patients and the health care system in general, say experts writing in an accompanying editorial.

“It has been well established that workup for NMIBC can have negative consequences for the physical and psychological health of patients,” note Grayden S. Cook, BS, and Jeffrey M. Howard, MD, PhD, both from University of Texas Southwestern Medical Center, Dallas.

“Many of these patients undergo frequent CT imaging of the urinary tract, which carries a high dose of radiation as well as the potential for financial toxic effects (that is, detrimental consequences to the patient because of health care costs),” they write.

Additionally, patient distress is a factor, as they may experience preprocedural anxiety, physical discomfort during procedures, and worry about disease progression, they point out.

“The impact of these patterns is substantial and may have negative consequences for both patients and the health care system,” they conclude. “Thus, it is imperative to move forward with initiatives that provide higher value and are more evidence-based and patient-centered.”
 

Study finds frequent surveillance

The American Urological Association (AUA)/Society of Urologic Oncology (SUO), the European Association of Urology, and the International Bladder Cancer Group have made an effort to de-escalate surveillance and treatment for patients with low-grade Ta disease, while at the same time maintaining appropriate surveillance for high-grade aggressive disease.

However, the new study found that in practice, such patients undergo frequent testing.

The study involved 13,054 patients with low-grade Ta NMIBC. Most of the participants were male (73.5%), with a median age of 76 years, and had no or few comorbidities (71.2%).

Most patients had undergone cystoscopy, and rates increased over time: from 79.3% of patients in 2004 to 81.5% of patients in 2013 (P = .007). Patients underwent a median of 3.0 cystoscopies per year following their diagnosis, and upper-tract imaging was performed in most patients.

The use of kidney ultrasonography also rose from 19% of patients in 2004 to 23.2% in 2013, as did retrograde pyelography (20.9% in 2004 vs. 24.2% in 2013). Conversely, the use of intravenous pyelography declined (from 14.5% in 2004 to 1.7% in 2012), but there was an increase in CT and MRI in all years except 2010 (from 30.4% of patients in 2004 to 47% of patients in 2013; P < .001). The rate of urine-based testing also significantly increased during the study period (from 44.8% in 2004 to 54.9% in 2013; P < .001), with patients undergoing two to three tests per year.

Adherence to current guidelines remained similar during the study time frame. For example, 55.2% of patients received two cystoscopies per year in 2004-2008, compared with 53.8% in 2009-2013 (P = .11), suggesting persistent overuse of all surveillance testing modalities.

As for treatment, 17.2% of patients received intravesical immunotherapy with bacillus Calmette-Guérin, and 6.1% were treated with intravesical chemotherapy (excluding receipt of a single perioperative dose). The disease recurrence rate within this cohort was 1.7%, and only 0.4% experienced disease progression.

In terms of costs, total median expenditures at 1 year after diagnosis increased by 60% during the study period, from $34,792 in 2004 to $53,986 in 2013. Higher costs were seen among patients who experienced a recurrence versus no recurrence ($76,669 vs. $53,909).

The study was supported by a grant from the U.S. Department of Defense Peer Reviewed Cancer Research Program. Several of the authors have disclosed relationships with industry, as noted in the original article. Editorialists Mr. Cook and Dr. Howard have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Clinicians are not following guidelines that recommend a de-escalation in surveillance for patients with low-risk non–muscle-invasive bladder cancer (NMIBC), a new study concludes.  

These cancers are associated with low rates of recurrence, progression, and bladder cancer–specific death, so current clinical practice guidelines recommend against frequent monitoring and testing.

However, the study authors found that patients with a low grade Ta NMIBC diagnosis underwent a median of three cystoscopies per year, and many also received a median of two imagine scans (CT or MRI) as well as 2-3 urine-based tests.

“These data suggest a need for ongoing efforts to limit overuse of treatment and surveillance, which may in turn mitigate associated increases in the costs of care,” write the authors, led by Kelly K. Bree, MD, from the department of urology, University of Texas MD Anderson Cancer Center, Houston. Bladder cancer has the highest lifetime treatment cost of all malignancies, they point out.

The study was published online in JAMA Network Open.
 

Higher value and more evidence-based

The impact of increased surveillance of this patient cohort has broad implications for patients and the health care system in general, say experts writing in an accompanying editorial.

“It has been well established that workup for NMIBC can have negative consequences for the physical and psychological health of patients,” note Grayden S. Cook, BS, and Jeffrey M. Howard, MD, PhD, both from University of Texas Southwestern Medical Center, Dallas.

“Many of these patients undergo frequent CT imaging of the urinary tract, which carries a high dose of radiation as well as the potential for financial toxic effects (that is, detrimental consequences to the patient because of health care costs),” they write.

Additionally, patient distress is a factor, as they may experience preprocedural anxiety, physical discomfort during procedures, and worry about disease progression, they point out.

“The impact of these patterns is substantial and may have negative consequences for both patients and the health care system,” they conclude. “Thus, it is imperative to move forward with initiatives that provide higher value and are more evidence-based and patient-centered.”
 

Study finds frequent surveillance

The American Urological Association (AUA)/Society of Urologic Oncologists (SUO), the European Association of Urology, and the International Bladder Cancer Group have made an effort to de-escalate surveillance and treatment for patients with low-grade Ta disease, while at the same time maintaining appropriate surveillance for high-grade aggressive disease.

However, the new study found that in practice, such patients undergo frequent testing.

The study involved 13,054 patients with low-grade Ta NMIBC. Most of the participants were male (73.5%), with a median age of 76 years, and had no or few comorbidities (71.2%).

Most patients had undergone cystoscopy, and rates increased over time: from 79.3% of patients in 2004 to 81.5% of patients in 2013 (P = .007). Patients underwent a median of 3.0 cystoscopies per year following their diagnosis, and upper-tract imaging was performed in most patients.

The use of kidney ultrasonography also rose from 19% of patients in 2004 to 23.2% in 2013, as did retrograde pyelography (20.9% in 2004 vs. 24.2% in 2013). Conversely, the use of intravenous pyelography declined (from 14.5% in 2004 to 1.7% in 2012), but there was an increase in CT and MRI in all years except 2010 (from 30.4% of patients in 2004 to 47% of patients in 2013; P < .001). The rate of urine-based testing also significantly increased during the study period (from 44.8% in 2004 to 54.9% in 2013; P < .001), with patients undergoing between two to three tests per year.

Adherence to current guidelines remained similar during the study time frame. For example, 55.2% of patients received two cystoscopies per year in 2004-2008, compared with 53.8% in 2009-2013 (P = .11), suggesting that there was an overuse of all surveillance testing modalities.

As for treatment, 17.2% received intravesical immunotherapy with bacillus Calmette-Guérin, 6.1% were treated with intravesical chemotherapy (excluding receipt of a single perioperative dose). Disease recurrence within this cohort was 1.7%, and only 0.4% experienced disease progression.

When looking at the cost, the total median expenditures at 1 year after diagnosis increased by 60% during the study period, from $34,792 in 2004 to $53,986 in 2013. Higher costs were seen among patients who experienced a recurrence versus no recurrence ($76,669 vs. $53,909).

The study was supported by a grant from the U.S. Department of Defense Peer Reviewed Cancer Research Program. Several of the authors have disclosed relationships with industry, as noted in the original article. Editorialists Mr. Cook and Dr. Howard have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Aspirin exposure fails to reduce cardiovascular event risk


The addition of aspirin to standard guideline management for blood pressure did not reduce the risk of cardiovascular events among adults with hypertension and controlled systolic blood pressure in a study.

The benefits of aspirin use for the primary prevention of atherosclerotic cardiovascular disease (ASCVD) have been questioned in light of data showing neutral outcomes in low-risk patients and concerns about increased bleeding risk and mortality in healthy older adults, wrote Rita Del Pinto, MD, of University of L’Aquila (Italy) and colleagues in JAMA Network Open.


In the study, Dr. Del Pinto and colleagues conducted a post hoc analysis of data from more than 2,500 participants in SPRINT (Systolic Blood Pressure Intervention Trial), a multicenter, randomized trial conducted from 2010 to 2013.

The goal of SPRINT was to compare intensive and standard blood pressure–lowering strategies in patients with hypertension. The primary outcome of the current study was the risk of a first cardiovascular event, which included adjudicated myocardial infarction, non–myocardial infarction acute coronary syndrome, stroke, acute heart failure, and CVD death.

“There has been considerable improvement in the management of cardiovascular risk factors since the first reports on aspirin use for cardiovascular prevention,” Dr. Del Pinto said in an interview.

“As for hypertension, not only have more effective antihypertensive medications become available, but also evidence has recently emerged in support of a downwards redefinition of blood pressure targets during treatment,” she said. “In this context, in an era when great attention is paid to the personalization of treatment, no specific studies had addressed the association of aspirin use as a primary prevention strategy in a cohort of relatively old, high-risk individuals with treated systolic blood pressure steadily below the recommended target,” she added.

The researchers assessed whether aspirin use in addition to standard blood pressure management (a target of less than 140 mm Hg) decreased cardiovascular event risk and improved survival.

The study population included 2,664 adult patients; 29.3% were women, and 24.5% were aged 75 years and older. Half of the patients (1,332) received aspirin, and the other half did not.

In a multivariate analysis, 42 cardiovascular events occurred in the aspirin group, compared with 20 events in those not exposed to aspirin (hazard ratio, 2.30). The findings were consistent in subgroup analyses of younger individuals, current and former smokers, and patients on statins.
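
As a rough check on these counts, the crude event rates and their ratio can be computed directly; the sketch below is illustrative arithmetic only, since the reported hazard ratio of 2.30 comes from a multivariate model and so will not equal the unadjusted ratio.

```python
# Crude event rates from the reported counts (illustrative only; the
# published HR of 2.30 is model-adjusted and differs from this ratio).
aspirin_events, aspirin_n = 42, 1332
no_aspirin_events, no_aspirin_n = 20, 1332

aspirin_rate = aspirin_events / aspirin_n           # ~3.2%
no_aspirin_rate = no_aspirin_events / no_aspirin_n  # ~1.5%

print(f"aspirin: {aspirin_rate:.1%}, no aspirin: {no_aspirin_rate:.1%}, "
      f"crude rate ratio: {aspirin_rate / no_aspirin_rate:.2f}")  # ~2.10
```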

An additional subgroup analysis of individuals randomized to standard or intensive blood pressure treatment in SPRINT showed no significant difference in primary outcome rates between individuals who received aspirin and those who did not. The rates for aspirin use vs. non–aspirin use were 5.85% vs. 3.60% in the standard treatment group and 4.66% vs. 2.56% in the intensive treatment group.

The study findings were limited by several factors, including the post hoc design, short follow-up period, and lack of data on the initiation of aspirin and bleeding events, the researchers wrote. However, the results suggest that modern management of hypertension may have redefined the potential benefits of aspirin in patients with hypertension, they concluded.

 

 

Findings confirm value of preventive care

“The study was conducted as a post-hoc analysis on an experimental cohort, which must be considered when interpreting the results,” Dr. Del Pinto said.

Despite the limitations, the study findings affirm that effective treatment of major cardiovascular risk factors, such as hypertension, with proven drugs is “a mainstay of the primary prevention of ASCVD,” she emphasized.

As for additional research, “Testing our findings in a dedicated setting with sufficiently long follow-up, where aspirin dose and indication, as well as any possible bleeding event, are reported could expand the clinical meaning of our observations,” said Dr. Del Pinto. “Also, the clinical impact of aspirin, even in combination with novel cardiovascular drugs such as direct oral anticoagulants, in populations exposed to combinations of risk factors, deserves further investigation.”

Data support shared decision-making

“While recent evidence has not shown a benefit of aspirin in the primary prevention of ASCVD in several populations, the subpopulation of patients with hypertension as an ASCVD risk factor is also of interest to the clinician,” Suman Pal, MD, of the University of New Mexico, Albuquerque, said in an interview. “The lack of benefit of aspirin in this study, despite its limitations, was surprising, and I would be eager to see how the role of aspirin in ASCVD prevention would continue to evolve in conjunction with improvement in other therapies for modification of risk factors.”

“The decision to continue aspirin in this subgroup of patients should warrant a discussion with patients and a reexamination of risks and benefits until further data are available,” Dr. Pal emphasized. 

Larger studies with long-term follow-ups would be required to further clarify the role of aspirin in primary prevention of ASCVD in patients with hypertension without diabetes or chronic kidney disease, he added.

Data were supplied courtesy of BioLINCC. The study received no outside funding. The researchers and Dr. Pal had no financial conflicts to disclose.


New smart device shows highly accurate AFib detection: mAFA II


Screening for heart rhythm disorders with a smartphone app and a wearable device had a high rate of correctly detecting atrial fibrillation (AFib) in a large new study.

The mAFA II study, conducted in a mass low-risk population in China, showed that more than 93% of possible AFib episodes detected by the smartphone app were confirmed to be AFib on further monitoring.



The study also used the app to screen for obstructive sleep apnea and found that sleep apnea was the most common risk factor associated with increased AFib susceptibility; those identified as having the most severe sleep apnea were 1.5 times more likely to have AFib than those without the condition.

This suggests that tools suitable for detecting both AFib and sleep apnea can work synergistically to further enhance health monitoring, said lead author, Yutao Guo, MD, professor of internal medicine at Chinese PLA General Hospital, Beijing.

Dr. Guo presented the mAFA II study at the American College of Cardiology (ACC) 2022 Scientific Session held in Washington, D.C., and online.

The trial, which involved more than 2.8 million participants, is the largest study to date to demonstrate how wearable consumer technologies can be used to screen for heart problems during everyday activities, Dr. Guo noted.

“Consumer-led screening with these technologies could increase early diagnosis of AFib and facilitate an integrated approach to fully implement clustered risk management to reduce AFib burden and its related complications,” she concluded.

Jodie Hurwitz, MD, director of the electrophysiology lab at Medical City Hospital, Dallas, the discussant for the study at the ACC session, called it “a pretty impressive study. To get a 93.8% confirmation of AFib with these devices is great.”

But Dr. Hurwitz pointed out that the patients in the study were relatively young (average age, 37 years), whereas the group that really needs such a device is much older.

“The take-home messages from this study are that AFib wearable detection algorithms have the ability to detect true AFib and that they might also be able to detect risk factors (such as sleep apnea) that predispose to AFib possibly even before AFib is present,” Dr. Hurwitz commented.

Moderator of the session, Edward Fry, MD, cardiologist at Ascension St. Vincent Heart Center, Indianapolis, and incoming president of the ACC, described the area of AFib screening with smart devices as “fascinating, especially with the perspective of the scalability of these types of studies.”

The mAFA II study tracked more than 2.8 million people who used a Huawei phone app together with Huawei and Honor smart devices incorporating photoplethysmography (PPG) technology, a light-based method to monitor blood flow and pulse. If an abnormal rhythm was detected, the wearer would be contacted by a clinician to set up an appointment for a clinical assessment.



Over the 4 years of the study, 12,244 users (0.4%) received a notification of suspected AFib. Among the 5,227 people who chose to follow up with a clinician, AFib was confirmed in 93.8% of patients using standard AFib diagnostic tools, including clinical evaluation, an electrocardiogram, and 24-hour Holter monitoring.
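
For readers who want to sanity-check the screening figures, here is a minimal Python sketch; note that the confirmed-case count is derived from the stated percentages rather than reported directly.

```python
# Back-of-envelope check of the mAFA II screening figures reported above.
users = 2_800_000           # "more than 2.8 million" participants
notified = 12_244           # suspected-AFib notifications over 4 years
followed_up = 5_227         # users who chose clinical follow-up
confirmed_fraction = 0.938  # 93.8% confirmed on standard workup

print(f"notification rate: {notified / users:.2%}")  # ~0.44%, the reported 0.4%
print(f"estimated confirmed cases: {round(followed_up * confirmed_fraction)}")  # ~4,903
```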

In this study, a subset of the individuals screened for AFib were also screened for signs of sleep apnea using the same PPG technology to detect physiological changes in parameters including oxygenation and respiratory rates. The app is also able to determine whether the individual is awake or asleep. Dr. Guo noted that the PPG algorithm for obstructive sleep apnea risk has been validated, compared with polysomnography or home sleep apnea tests.

Using measurements of apnea (signaled by a reduced respiratory rate) and hypopnea (signaled by decreased oxygenation), the apnea–hypopnea index (AHI) is calculated to determine the severity of the sleep apnea.
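
The index itself is simple arithmetic: events per hour of sleep. Below is a minimal sketch; the event counts are hypothetical, and the severity bands are the commonly used clinical cut-points, with an AHI of 30 or more corresponding to the “severe” threshold cited in the study.

```python
def apnea_hypopnea_index(apnea_events: int, hypopnea_events: int,
                         sleep_hours: float) -> float:
    """AHI = (apneas + hypopneas) per hour of sleep."""
    return (apnea_events + hypopnea_events) / sleep_hours

def severity(ahi: float) -> str:
    # Commonly used clinical cut-points; AHI >= 30 is the "severe"
    # threshold referenced in the study.
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"

# Hypothetical night: 120 apneas and 90 hypopneas over 7 hours of sleep.
ahi = apnea_hypopnea_index(120, 90, 7.0)
print(f"AHI = {ahi:.1f} ({severity(ahi)})")  # AHI = 30.0 (severe)
```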

Of the 961,931 participants screened for sleep apnea, about 18,000 were notified that they might have the condition.

Obstructive sleep apnea was the most commonly reported risk factor associated with increased AFib susceptibility, and individuals with the highest-risk sleep apnea (more than 80% of monitoring measures showing an AHI of 30 or greater during sleep) had a 1.5-fold increase in prevalent AFib, Dr. Guo reported.

The mAFA II is the latest of several studies to show that AFib can be detected with various smartphone apps and wearable devices. Previous studies have included the Fitbit Heart Study and the Apple Heart Study.

Dr. Hurwitz told this news organization that the electrophysiologist community is enthusiastic about this new smart device technology.

“I sent my sister one so she could determine if she develops AFib: That’s a pretty good endorsement,” she commented, but added that there are still concerns about the rate of false-positive results.

Dr. Hurwitz said she suspected that there will probably be meaningful differences between the different apps and devices, but the algorithms are all proprietary, and the use of photoplethysmography seems to make a big difference.

She noted that the detection of sleep apnea in the current study was a novel approach. “This is important, as sleep apnea is felt to contribute to AFib, and treating it is felt to decrease the frequency of AFib. Perhaps if patients with sleep apnea were treated before they had documented AFib, the AFib burden could be reduced,” she said.

She added that further studies were needed to fine-tune the algorithms and to identify other factors or heart rate variability patterns that may predict future risk of AFib.

The study was funded by the National Natural Science Foundation of China. Dr. Guo reports no disclosures.

A version of this article first appeared on Medscape.com.


Babies die as congenital syphilis continues a decade-long surge across the U.S.


For a decade, the number of babies born with syphilis in the United States has surged, undeterred. Data released Apr. 12 by the Centers for Disease Control and Prevention shows just how dire the outbreak has become.

In 2012, 332 babies were born infected with the disease. In 2021, that number had climbed nearly sevenfold, to at least 2,268, according to preliminary estimates. And 166 of those babies died.

About 7% of babies diagnosed with syphilis in recent years have died; thousands of others born with the disease have faced problems that include brain and bone malformations, blindness, and organ damage.
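
The reported figures are easy to verify; a quick arithmetic check follows (the 2021 counts are preliminary, as noted above).

```python
# Quick check of the reported congenital syphilis counts.
cases_2012, cases_2021, deaths_2021 = 332, 2268, 166

print(f"increase, 2012 to 2021: {cases_2021 / cases_2012:.1f}-fold")  # ~6.8, "nearly sevenfold"
print(f"deaths among 2021 cases: {deaths_2021 / cases_2021:.1%}")     # ~7.3%, "about 7%"
```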

For public health officials, the situation is all the more heartbreaking, considering that congenital syphilis rates reached near-historic modern lows from 2000 to 2012 amid ambitious prevention and education efforts. By 2020, following a sharp erosion in funding and attention, the nationwide case rate was more than seven times that of 2012.

“The really depressing thing about it is we had this thing virtually eradicated back in the year 2000,” said William Andrews, a public information officer for Oklahoma’s sexual health and harm reduction service. “Now it’s back with a vengeance. We are really trying to get the message out that sexual health is health. It’s nothing to be ashamed of.”

Even as caseloads soar, the CDC budget for STD prevention – the primary funding source for most public health departments – has been largely stagnant for two decades, its purchasing power dragged even lower by inflation.

The CDC report on STD trends provides official data on congenital syphilis cases for 2020, as well as preliminary case counts for 2021 that are expected to increase. CDC data shows that congenital syphilis rates in 2020 continued to climb in already overwhelmed states like Texas, California, and Nevada and that the disease is now present in almost every state in the nation. All but three states – Maine, New Hampshire, and Vermont – reported congenital syphilis cases in 2020.

From 2011 to 2020, congenital syphilis resulted in 633 documented stillbirths and infant deaths, according to the new CDC data.

Preventing congenital syphilis – the term used when syphilis is transferred to a fetus in utero – is from a medical standpoint exceedingly simple: If a pregnant woman is diagnosed at least a month before giving birth, just a few shots of penicillin have a near-perfect cure rate for mother and baby. But funding cuts and competing priorities in the nation’s fragmented public health care system have vastly narrowed access to such services.

The reasons pregnant people with syphilis go undiagnosed or untreated vary geographically, according to data collected by states and analyzed by the CDC.

In Western states, the largest share of cases involve women who have received little to no prenatal care and aren’t tested for syphilis until they give birth. Many have substance use disorders, primarily related to methamphetamines. “They’ve felt a lot of judgment and stigma by the medical community,” said Stephanie Pierce, MD, a maternal fetal medicine specialist at the University of Oklahoma, Oklahoma City, who runs a clinic for women with high-risk pregnancies.

In Southern states, a CDC study of 2018 data found that the largest share of congenital syphilis cases were among women who had been tested and diagnosed but hadn’t received treatment. That year, among Black moms who gave birth to a baby with syphilis, 37% had not been treated adequately even though they’d received a timely diagnosis. Among white moms, that number was 24%. Longstanding racism in medical care, poverty, transportation issues, poorly funded public health departments, and crowded clinics whose employees are too overworked to follow up with patients all contribute to the problem, according to infectious disease experts.

Doctors are also noticing a growing number of women who are treated for syphilis but reinfected during pregnancy. Amid rising cases and stagnant resources, some states have focused disease investigations on pregnant women of childbearing age; they can no longer prioritize treating sexual partners who are also infected.

Eric McGrath, MD, a pediatric infectious disease specialist at Wayne State University, Detroit, said that he’d seen several newborns in recent years whose mothers had been treated for syphilis but then were re-exposed during pregnancy by partners who hadn’t been treated.

Treating a newborn baby for syphilis isn’t trivial. Penicillin carries little risk, but delivering it to a baby often involves a lumbar puncture and other painful procedures. And treatment typically means keeping the baby in the hospital for 10 days, interrupting an important time for family bonding.

Dr. McGrath has seen a couple of babies in his career who weren’t diagnosed or treated at birth and later came to him with full-blown syphilis complications, including full-body rashes and inflamed livers. It was an awful experience he doesn’t want to repeat. The preferred course, he said, is to spare the baby the ordeal and treat parents early in the pregnancy.

But in some places, providers aren’t routinely testing for syphilis. Although most states mandate testing at some point during pregnancy, as of last year just 14 required it for everyone in the third trimester. The CDC recommends third-trimester testing in areas with high rates of syphilis, a growing share of the United States.

After Arizona declared a statewide outbreak in 2018, state health officials wanted to know whether widespread testing in the third trimester could have prevented infections. Looking at 18 months of data, analysts found that nearly three-quarters of the more than 200 pregnant women diagnosed with syphilis in 2017 and the first half of 2018 got treatment. That left 57 babies born with syphilis, nine of whom died. The analysts estimated that a third of the infections could have been prevented with testing in the third trimester.

Based on the numbers they saw in those 18 months, officials estimated that screening all women on Medicaid in the third trimester would cost the state $113,300 annually, and that treating all cases of syphilis that screening would catch could be done for just $113. Factoring in the hospitalization costs for infected infants, the officials concluded the additional testing would save the state money.
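
As a hypothetical break-even sketch of that conclusion: the prevented-case count below is an assumption that applies the analysts’ one-third estimate to the 57 congenital cases, and the article does not give per-case hospitalization costs.

```python
# Hypothetical break-even arithmetic for the Arizona screening estimate.
annual_screening_cost = 113_300   # screening all Medicaid patients in the 3rd trimester
prevented_in_18_months = 57 / 3   # assumption: one-third of the 57 congenital cases
prevented_per_year = prevented_in_18_months / 1.5

break_even_cost = annual_screening_cost / prevented_per_year
# Screening pays for itself if each averted case would have cost more than this.
print(f"break-even hospitalization cost per averted case: ${break_even_cost:,.0f}")  # ~$8,945
```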

And yet prevention money has been hard to come by. Taking inflation into account, CDC prevention funding for STDs has fallen 41% since 2003, according to an analysis by the National Coalition of STD Directors. That’s even as cases have risen, leaving public health departments saddled with more work and far less money.

Janine Waters, STD program manager for the state of New Mexico, has watched the unraveling. When Ms. Waters started her career more than 20 years ago, she and her colleagues followed up on every case of chlamydia, gonorrhea, and syphilis reported, not only making sure that people got treatment but also getting in touch with their sexual partners, with the aim of stopping the spread of infection. In a 2019 interview with Kaiser Health News, she said her team was struggling to keep up with syphilis alone, even as they registered with dread congenital syphilis cases surging in neighboring Texas and Arizona.

By 2020, New Mexico had the highest rate of congenital syphilis in the country.

The COVID-19 pandemic drained the remaining resources. Half of health departments across the country discontinued STD fieldwork altogether, diverting their resources to COVID. In California, which for years has struggled with high rates of congenital syphilis, three-quarters of local health departments dispatched more than half of their STD staffers to work on COVID.

As the pandemic ebbs – at least in the short term – many public health departments are turning their attention back to syphilis and other diseases. And they are doing it with reinforcements. Although the Biden administration’s proposed STD prevention budget for 2023 remains flat, the American Rescue Plan Act included $200 million to help health departments boost contact tracing and surveillance for COVID and other infectious diseases. Many departments are funneling that money toward STDs.

The money is an infusion that state health officials say will make a difference. But when taking inflation into account, it essentially brings STD prevention funding back to what it was in 2003, said Stephanie Arnold Pang of the National Coalition of STD Directors. And the American Rescue Plan money doesn’t cover some aspects of STD prevention, including clinical services.

The coalition wants to revive dedicated STD clinics, where people can drop in for testing and treatment at little to no cost. Advocates say that would fill a void that has plagued treatment efforts since public clinics closed en masse in the wake of the 2008 recession.

Texas, battling its own pervasive outbreak, will use its share of American Rescue Plan money to fill 94 new positions focused on various aspects of STD prevention. Those hires will bolster a range of measures the state put in place before the pandemic, including an updated data system to track infections, review boards in major cities that examine what went wrong for every case of congenital syphilis, and a requirement that providers test for syphilis during the third trimester of pregnancy. The suite of interventions seems to be working, but it could be a while before cases go down, said Amy Carter, the state’s congenital syphilis coordinator.

“The growth didn’t happen overnight,” Ms. Carter said. “So our prevention efforts aren’t going to have a direct impact overnight either.”

KHN (Kaiser Health News) is a national newsroom that produces in-depth journalism about health issues. Together with Policy Analysis and Polling, KHN is one of the three major operating programs at KFF (Kaiser Family Foundation). KFF is an endowed nonprofit organization providing information on health issues to the nation.

 

 

Publications
Topics
Sections

For a decade, the number of babies born with syphilis in the United States has surged, undeterred. Data released Apr. 12 by the Centers for Disease Control and Prevention shows just how dire the outbreak has become.

In 2012, 332 babies were born infected with the disease. In 2021, that number had climbed nearly sevenfold, to at least 2,268, according to preliminary estimates. And 166 of those babies died.

About 7% of babies diagnosed with syphilis in recent years have died; thousands of others born with the disease have faced problems that include brain and bone malformations, blindness, and organ damage.

For public health officials, the situation is all the more heartbreaking, considering that congenital syphilis rates reached near-historic modern lows from 2000 to 2012 amid ambitious prevention and education efforts. By 2020, following a sharp erosion in funding and attention, the nationwide case rate was more than seven times that of 2012.

“The really depressing thing about it is we had this thing virtually eradicated back in the year 2000,” said William Andrews, a public information officer for Oklahoma’s sexual health and harm reduction service. “Now it’s back with a vengeance. We are really trying to get the message out that sexual health is health. It’s nothing to be ashamed of.”

Even as caseloads soar, the CDC budget for STD prevention – the primary funding source for most public health departments – has been largely stagnant for two decades, its purchasing power dragged even lower by inflation.

The CDC report on STD trends provides official data on congenital syphilis cases for 2020, as well as preliminary case counts for 2021 that are expected to increase. CDC data shows that congenital syphilis rates in 2020 continued to climb in already overwhelmed states like Texas, California, and Nevada and that the disease is now present in almost every state in the nation. All but three states – Maine, New Hampshire, and Vermont – reported congenital syphilis cases in 2020.

From 2011 to 2020, congenital syphilis resulted in 633 documented stillbirths and infant deaths, according to the new CDC data.

Preventing congenital syphilis – the term used when syphilis is transferred to a fetus in utero – is from a medical standpoint exceedingly simple: If a pregnant woman is diagnosed at least a month before giving birth, just a few shots of penicillin have a near-perfect cure rate for mother and baby. But funding cuts and competing priorities in the nation’s fragmented public health care system have vastly narrowed access to such services.

The reasons pregnant people with syphilis go undiagnosed or untreated vary geographically, according to data collected by states and analyzed by the CDC.

In Western states, the largest share of cases involve women who have received little to no prenatal care and aren’t tested for syphilis until they give birth. Many have substance use disorders, primarily related to methamphetamines. “They’ve felt a lot of judgment and stigma by the medical community,” said Stephanie Pierce, MD, a maternal fetal medicine specialist at the University of Oklahoma, Oklahoma City, who runs a clinic for women with high-risk pregnancies.

In Southern states, a CDC study of 2018 data found that the largest share of congenital syphilis cases were among women who had been tested and diagnosed but hadn’t received treatment. That year, among Black moms who gave birth to a baby with syphilis, 37% had not been treated adequately even though they’d received a timely diagnosis. Among white moms, that number was 24%. Longstanding racism in medical care, poverty, transportation issues, poorly funded public health departments, and crowded clinics whose employees are too overworked to follow up with patients all contribute to the problem, according to infectious disease experts.

Doctors are also noticing a growing number of women who are treated for syphilis but reinfected during pregnancy. Amid rising cases and stagnant resources, some states have focused disease investigations on pregnant women of childbearing age; they can no longer prioritize treating sexual partners who are also infected.

Eric McGrath, MD, a pediatric infectious disease specialist at Wayne State University, Detroit, said that he’d seen several newborns in recent years whose mothers had been treated for syphilis but then were re-exposed during pregnancy by partners who hadn’t been treated.

Treating a newborn baby for syphilis isn’t trivial. Penicillin carries little risk, but delivering it to a baby often involves a lumbar puncture and other painful procedures. And treatment typically means keeping the baby in the hospital for 10 days, interrupting an important time for family bonding.

Dr. McGrath has seen a couple of babies in his career who weren’t diagnosed or treated at birth and later came to him with full-blown syphilis complications, including full-body rashes and inflamed livers. It was an awful experience he doesn’t want to repeat. The preferred course, he said, is to spare the baby the ordeal and treat parents early in the pregnancy.

But in some places, providers aren’t routinely testing for syphilis. Although most states mandate testing at some point during pregnancy, as of last year just 14 required it for everyone in the third trimester. The CDC recommends third-trimester testing in areas with high rates of syphilis, a growing share of the United States.

After Arizona declared a statewide outbreak in 2018, state health officials wanted to know whether widespread testing in the third trimester could have prevented infections. Looking at 18 months of data, analysts found that nearly three-quarters of the more than 200 pregnant women diagnosed with syphilis in 2017 and the first half of 2018 got treatment. That left 57 babies born with syphilis, nine of whom died. The analysts estimated that a third of the infections could have been prevented with testing in the third trimester.

Based on the numbers they saw in those 18 months, officials estimated that screening all women on Medicaid in the third trimester would cost the state $113,300 annually, and that treating all cases of syphilis that screening would catch could be done for just $113. Factoring in the hospitalization costs for infected infants, the officials concluded the additional testing would save the state money.

And yet prevention money has been hard to come by. Taking inflation into account, CDC prevention funding for STDs has fallen 41% since 2003, according to an analysis by the National Coalition of STD Directors. That’s even as cases have risen, leaving public health departments saddled with more work and far less money.

Janine Waters, STD program manager for the state of New Mexico, has watched the unraveling. When Ms. Waters started her career more than 20 years ago, she and her colleagues followed up on every case of chlamydia, gonorrhea, and syphilis reported, not only making sure that people got treatment but also getting in touch with their sexual partners, with the aim of stopping the spread of infection. In a 2019 interview with Kaiser Health News, she said her team was struggling to keep up with syphilis alone, even as they registered with dread congenital syphilis cases surging in neighboring Texas and Arizona.

By 2020, New Mexico had the highest rate of congenital syphilis in the country.

The COVID-19 pandemic drained the remaining resources. Half of health departments across the country discontinued STD fieldwork altogether, diverting their resources to COVID. In California, which for years has struggled with high rates of congenital syphilis, three-quarters of local health departments dispatched more than half of their STD staffers to work on COVID.

As the pandemic ebbs – at least in the short term – many public health departments are turning their attention back to syphilis and other diseases. And they are doing it with reinforcements. Although the Biden administration’s proposed STD prevention budget for 2023 remains flat, the American Rescue Plan Act included $200 million to help health departments boost contact tracing and surveillance for COVID and other infectious diseases. Many departments are funneling that money toward STDs.

The money is an infusion that state health officials say will make a difference. But when taking inflation into account, it essentially brings STD prevention funding back to what it was in 2003, said Stephanie Arnold Pang of the National Coalition of STD Directors. And the American Rescue Plan money doesn’t cover some aspects of STD prevention, including clinical services.

The coalition wants to revive dedicated STD clinics, where people can drop in for testing and treatment at little to no cost. Advocates say that would fill a void that has plagued treatment efforts since public clinics closed en masse in the wake of the 2008 recession.

Texas, battling its own pervasive outbreak, will use its share of American Rescue Plan money to fill 94 new positions focused on various aspects of STD prevention. Those hires will bolster a range of measures the state put in place before the pandemic, including an updated data system to track infections, review boards in major cities that examine what went wrong for every case of congenital syphilis, and a requirement that providers test for syphilis during the third trimester of pregnancy. The suite of interventions seems to be working, but it could be a while before cases go down, said Amy Carter, the state’s congenital syphilis coordinator.

“The growth didn’t happen overnight,” Ms. Carter said. “So our prevention efforts aren’t going to have a direct impact overnight either.”

KHN (Kaiser Health News) is a national newsroom that produces in-depth journalism about health issues. Together with Policy Analysis and Polling, KHN is one of the three major operating programs at KFF (Kaiser Family Foundation). KFF is an endowed nonprofit organization providing information on health issues to the nation.

Asymptomatic C. difficile carriers may infect the people they live with after hospitalization

Hospitalized patients who are asymptomatic Clostridioides difficile carriers may infect people they live with after they return home, a study based on U.S. insurance claims data suggests.

Although C. difficile infection (CDI) is considered to be a common hospital-acquired infection, reports of community-associated CDI in patients who have not been hospitalized are increasing, the authors wrote in Emerging Infectious Diseases.

“Individuals in households where another family member was recently hospitalized but not diagnosed with a CDI appear to be at increased risk for CDI,” said lead author Aaron C. Miller, PhD, a research assistant professor in the department of internal medicine at the University of Iowa, Iowa City. “When individuals are hospitalized, they may become colonized with C. difficile without developing symptoms and subsequently transmit the pathogen to other family members after they return home,” he said by email.

Dr. Miller and colleagues analyzed insurance claims data from 2001 through 2017 using the U.S. Commercial Claims and Medicare Supplemental datasets of IBM MarketScan Research Databases. Over that period, they searched employer-sponsored commercial insurance claims and Medicare supplemental claims of 194,424 enrollees, and they linked claims from multiple family members in the same enrollment plan.
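The household linkage the authors describe can be pictured with a toy example: claims that share a family identifier are grouped into households, and a CDI diagnosis is flagged when a different member of the same household was hospitalized shortly beforehand. The column names and records below are invented, and the logic is simplified; MarketScan’s actual schema and the study’s 60-day exposure window are more involved.

```python
import pandas as pd

# Toy sketch of household linkage from claims data. Columns and values are
# INVENTED for illustration; they do not reflect MarketScan's actual schema.
claims = pd.DataFrame({
    "enrollee_id": [1, 2, 2, 3],
    "family_id":   ["A", "A", "A", "B"],
    "event":       ["hospitalization", "cdi_diagnosis", "office_visit", "cdi_diagnosis"],
    "month":       ["2017-01", "2017-02", "2017-03", "2017-02"],  # ISO strings sort chronologically
})

# Flag a CDI diagnosis as potentially household-linked when a *different*
# member of the same family was hospitalized in an earlier month. (The study
# used a 60-day exposure window; this is a simplification.)
for family, grp in claims.groupby("family_id"):
    hosp = grp[grp["event"] == "hospitalization"]
    for _, cdi in grp[grp["event"] == "cdi_diagnosis"].iterrows():
        linked = ((hosp["enrollee_id"] != cdi["enrollee_id"])
                  & (hosp["month"] < cdi["month"])).any()
        print(family, cdi["enrollee_id"], "household-linked" if linked else "not linked")
```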

They identified 224,818 CDI cases, and 3,871 of them were considered potential asymptomatic C. difficile transmissions from a recently hospitalized family member.

The researchers gathered monthly C. difficile incidence data from households with a family member who had been hospitalized within the past 60 days and compared them with data from households without a hospitalized family member.

Enrollees exposed to a recently hospitalized family member had a 73% greater incidence of CDI compared with enrollees who were not exposed. The longer the family member’s hospital stay, the greater the risk that someone in the household became infected.

Compared with people whose family members were hospitalized less than 1 day, people whose family members were hospitalized from 1 to 3 days had an incidence rate ratio (IRR) of 1.30 (95% confidence interval [CI], 1.19-1.41), and those whose family members were hospitalized for more than 30 days had an IRR of 2.45 (95% CI, 1.66-3.60).
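For readers unfamiliar with the measure, the sketch below shows how an incidence rate ratio and its 95% confidence interval are conventionally computed from case counts and person-time, using the standard log-scale normal approximation. The counts are invented for illustration; they are not the study’s data.

```python
import math

# Generic IRR calculation with a 95% CI via the log-normal approximation.
# All counts are INVENTED for illustration -- not the MarketScan study's data.
cases_exposed, time_exposed = 130, 100_000      # household-months with a recent hospitalization
cases_unexposed, time_unexposed = 100, 100_000  # household-months without one

irr = (cases_exposed / time_exposed) / (cases_unexposed / time_unexposed)

# For Poisson counts a and b, the standard error of log(IRR) is sqrt(1/a + 1/b).
se = math.sqrt(1 / cases_exposed + 1 / cases_unexposed)
lo = math.exp(math.log(irr) - 1.96 * se)
hi = math.exp(math.log(irr) + 1.96 * se)
print(f"IRR {irr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # IRR 1.30 (95% CI, 1.00-1.69)
```

Read the same way, the study’s IRR of 2.45 means households exposed to a hospital stay longer than 30 days saw roughly two and a half times the baseline CDI rate.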

CDI incidence increased with age. Compared with people aged 17 years or younger, those over 65 had an IRR of 9.32 (95% CI, 8.92-9.73).

Females had higher CDI incidence than males (IRR 1.30; 95% CI, 1.28-1.33).

Households with an infant also had higher CDI incidence than those without (IRR 1.5; 95% CI, 1.44-1.58).

People taking antimicrobials had higher CDI IRRs: 2.69 (95% CI, 2.59-2.79) for low-CDI-risk antibiotics and 8.83 (95% CI, 8.63-9.03) for high-CDI-risk antibiotics.

People taking proton-pump inhibitors had an IRR of 2.23 (95% CI, 2.15-2.30).
 

Reactions from four experts

Douglas S. Paauw, MD, MACP, professor of medicine and the chair for patient-centered clinical education at the University of Washington, Seattle, was not surprised by the findings. “We have wondered for a while how community-acquired CDI occurs,” he said in an email. “This important study offers a plausible explanation for some cases.”

Dr. Paauw advises doctors to consider CDI in their patients who have been exposed to hospitalized people.

David M. Aronoff, MD, FIDSA, FAAM, professor of medicine and the chair of the department of medicine at Indiana University, Indianapolis, advises providers to educate hospital patients being discharged about how CDI is spread and how they can practice good hand hygiene at home.

“An open question of this strong study is whether we should be testing certain hospital patients for asymptomatic C. difficile carriage before they are discharged,” he added in an email.

In a phone interview, Paul G. Auwaerter, MD, MBA, professor of medicine and clinical director of the division of infectious diseases at Johns Hopkins University, Baltimore, noted that community-acquired CDI is frequent enough that his institution performs routine C. difficile testing on all patients with unexplained severe diarrhea.

“This intriguing study bears additional research and follow-up because clearly these spores are hardy,” he said. “But a key point in this billings- and claims-based study is that no one knows where household members acquired CDI, whether it was actually through household transmission.”

Ramin Asgary, MD, MPH, FASTMH, associate professor of global health in the Milken Institute School of Public Health at George Washington University, Washington, cautioned about “an increasing issue with drug-resistant CDI.

“This important, timely study provides another step in the right direction to better understanding and addressing CDI and other hospital-based infections that have become increasing threats to the safety of our patients, their families, and health care in general,” he said in an email.

Dr. Miller said that the scale and scope of the data are strengths of the study, and he acknowledged that its basis in claims and billing data is a limitation. He and his group plan to explore genetic relationships involved in CDI transmission.

The study was funded by the Centers for Disease Control and Prevention. All authors and independent experts have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.
