
Early remission in lupus nephritis can still progress to advanced CKD


Nearly 8% of people with lupus nephritis who achieve complete remission of disease within 1 year of starting treatment will still go on to develop advanced chronic kidney disease (CKD), according to a presentation at an international congress on systemic lupus erythematosus.

Rheumatologist Dafna Gladman, MD, professor of medicine at the University of Toronto and codirector of the Lupus Clinic at Toronto Western Hospital, showed data from the Lupus Clinic’s prospective longitudinal cohort study in 273 patients with confirmed lupus nephritis who achieved complete remission within 12 months of baseline.

Photo: Dr. Dafna Gladman (Bianca Nogrady/MDedge News)

Remission was defined as less than 0.5 g proteinuria over 24 hours, inactive urinary sediment, and serum creatinine less than 120% of baseline.
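Expressed as a simple check, that three-part definition looks like the sketch below. The function name, parameter names, and the creatinine units are our own illustrative choices, not anything specified in the study.

```python
# Minimal sketch of the study's complete-remission definition. All names,
# types, and units here are illustrative assumptions, not the study's code.
def complete_remission(proteinuria_g_per_24h: float,
                       urinary_sediment_active: bool,
                       creatinine_umol_per_l: float,
                       baseline_creatinine_umol_per_l: float) -> bool:
    """Return True when all three remission criteria are met."""
    return (proteinuria_g_per_24h < 0.5                      # <0.5 g/24 h
            and not urinary_sediment_active                  # inactive sediment
            and creatinine_umol_per_l
                < 1.2 * baseline_creatinine_umol_per_l)      # <120% of baseline

# Example: 0.3 g/24 h proteinuria, inactive sediment, creatinine 110% of baseline
print(complete_remission(0.3, False, 99.0, 90.0))  # True
```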

Of this group, 21 patients (7.7%) progressed to advanced CKD during postenrollment follow-up, which ranged from 0.7 to 31.7 years (median, 5.8 years).

Patients who had experienced at least one flare during their first 5 years were around 4.5 times more likely to progress to advanced CKD than were those who did not experience a flare.

While the study excluded patients who already had advanced CKD, the analysis found those with evidence of impaired kidney function at baseline also had more than a fourfold higher risk of developing advanced CKD.

Other significant risk factors for progression were having low complement C3 levels at baseline and having had a longer duration of disease before enrollment.

“Those patients already have abnormal renal function, so the message is that patients who are already in trouble, you’ve got to watch them very carefully,” Dr. Gladman said in an interview.



The study also looked at whether there was a difference between patients who developed advanced CKD earlier – before the median of 5.8 years – or later. While the numbers were small, Dr. Gladman said patients who progressed earlier tended to be older and were more likely to be on antihypertensive treatment, to have a lower estimated glomerular filtration rate, and to have a lower Systemic Lupus Erythematosus Disease Activity Index–2K score, compared with those who progressed later. Some patients were also noncompliant and/or experienced concomitant infections; four had moderate to severe interstitial fibrosis and tubular atrophy.

“We conclude that such patients should be monitored closely despite early remission, and we also highlight the importance of maintenance therapy, which should be communicated to the patients to prevent noncompliance and subsequent flare,” Dr. Gladman told the conference.

Dr. Gladman said her clinic told patients from the very beginning of their treatment that they would need to be seen at 2- to 6-month intervals, regardless of how well controlled their disease was.

Commenting on the presentation, rheumatologist Mandana Nikpour, MD, PhD, of St. Vincent’s Hospital in Melbourne, said the findings showed the importance of keeping a close eye on patients with lupus nephritis, even if their disease appears to be in remission.

“If you’ve had nephritis, and you go into remission, you may already have a degree of damage in your kidneys,” said Dr. Nikpour, also from the University of Melbourne. “If there’s a degree of uncontrolled hypertension, or if a patient is noncompliant with their treatment, and there’s a degree of grumbling disease activity, that can all conspire and add up to result in long-term kidney damage and loss of renal function.”

Dr. Gladman has received grants or research support from, or has consulted for, Amgen, AbbVie, Celgene, Eli Lilly, Janssen, Novartis, Pfizer, UCB, Bristol-Myers Squibb, Galapagos, and Gilead.

AT LUPUS 2023

Coronary artery calcium score bests polygenic risk score in CHD prediction


As a predictor of coronary heart disease (CHD) events, the coronary artery calcium (CAC) score on computed tomography had better risk discrimination than the polygenic risk score, a binational study found. And when added to classic cardiovascular risk factors, the CAC score significantly improved risk classification while the polygenic risk score did not.

Photo: Dr. Sadiya S. Khan

These findings emerged from two large cohorts of middle-aged and older White adults from the United States and the Netherlands in the first head-to-head comparison of these two approaches. Led by Sadiya S. Khan, MD, MSc, an assistant professor of medicine (cardiology) and preventive medicine (epidemiology) at Northwestern University, Chicago, the study was published online in JAMA.

There has been much interest in using both genetic factors and CT imaging to better identify individuals at risk for heart disease. “Each approach has advantages and disadvantages, and we wanted to better understand the comparative predictive utility to provide support for what the preferred approach should be,” Dr. Khan said in an interview. “We focused on middle-aged to older adults for whom current risk prediction equations are relevant in estimating risk with the Pooled Cohort Equation, or PCE.”

The CAC score’s superiority may stem from its direct visualization of arterial calcification, and thus of subclinical disease burden, rather than from a reliance on common genetic variants, Dr. Khan explained. “In addition, prior studies have demonstrated that genetics, or inherited risk, is not destiny, so this score may not perform as well for risk discrimination as the traditional risk factors themselves along with CT.”
 

The study

Study participants came from the U.S. Multi-Ethnic Study of Atherosclerosis (MESA, n = 1,991) and the Dutch Rotterdam Study (RS, n = 1,217). Ages ranged from 45 to 79 years, with medians of 61 and 68 years in the two cohorts, respectively. Slightly more than half of participants in both groups were female.

Traditional risk factors were used to calculate CHD risk with the pooled cohort equations, computed tomography was used to determine the CAC score, and genotyped samples were used to derive a validated polygenic risk score.

Both scores were significantly associated with 10-year risk of incident CHD.

The median predicted atherosclerotic disease risk based on traditional risk factors was 6.99% in MESA and 5.93% in RS. During the total available follow-up in MESA (median, 16.0 years) and RS (median, 14.2 years), incident CHD occurred in 187 participants (9.4%) and 98 participants (8.1%), respectively.

C (concordance) statistics for the two scores showed the superiority of the CAC score. This statistic measures a model’s ability to rank patients from high to low risk, with a value of 1 indicating perfect concordance and values of 0.70 or more indicating good concordance and risk discrimination. The CAC score had a C statistic of 0.76 (95% confidence interval, 0.71-0.79) vs. 0.69 for the polygenic risk score (95% CI, 0.63-0.71).
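For reference, the concordance statistic has a standard probabilistic definition (the textbook formulation, not notation taken from this study): for a randomly chosen comparable pair of subjects, it is the probability that the model assigns the higher predicted risk to the subject whose event occurs first,

C = \Pr\bigl(\hat{r}_i > \hat{r}_j \mid T_i < T_j\bigr),

where \hat{r}_i is the model’s predicted risk for subject i and T_i is that subject’s observed event time.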

When each score was added to the PCEs, the C statistic increased by 0.09 (95% CI, 0.06-0.13) with the CAC score, by 0.02 (95% CI, 0.00-0.04) with the polygenic risk score, and by 0.10 (95% CI, 0.07-0.14) with both.

Net reclassification improved significantly when the CAC score was added to the PCEs (0.19; 95% CI, 0.06-0.28) but not when the polygenic risk score was added (0.04; 95% CI, –0.05 to 0.10).
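The net reclassification improvement quoted here is, in its usual categorical form (again the standard definition, not the study’s own notation),

\text{NRI} = \bigl[P(\text{up}\mid\text{event}) - P(\text{down}\mid\text{event})\bigr] + \bigl[P(\text{down}\mid\text{nonevent}) - P(\text{up}\mid\text{nonevent})\bigr],

where “up” and “down” denote movement to a higher or lower risk category once the new marker is added to the model.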

In the clinical setting, Dr. Khan said, “The use of CT in patients who are at intermediate risk for heart disease can be helpful in refining risk estimation and guiding recommendations for lipid-lowering therapy. Polygenic risk scores are not helpful in middle-aged to older adults above and beyond traditional risk factors for predicting risk of heart disease.”

This study was supported by the National Heart, Lung, and Blood Institute. MESA is supported by the NHLBI. The Rotterdam Study is funded by Erasmus Medical Center and Erasmus University Rotterdam; the Netherlands Organization for Scientific Research; the Netherlands Organization for Health Research and Development; the Research Institute for Diseases in the Elderly; the Netherlands Genomics Initiative; the Ministry of Education, Culture and Science, the Ministry of Health, Welfare and Sports; the European Commission (DG XII); and the Municipality of Rotterdam. Dr. Khan reported grants from the NHLBI and the NIH during the study and outside of the submitted work. Several coauthors reported grant support from, variously, the NIH, the NHLBI, and the American Heart Association.
 

FROM JAMA

Focus of new ASH VTE guidelines: Thrombophilia testing


Thrombophilia testing should be limited to specific circumstances, including when venous thromboembolism (VTE) is provoked by nonsurgical risk factors such as pregnancy or oral contraception use, according to new clinical practice guidelines released by the American Society of Hematology (ASH). Individuals with a family history of VTE and high-risk thrombophilia, and those with VTE at unusual body sites, should also be tested, the guidelines panel agreed.

“These guidelines will potentially change practice – we know that providers and patients will make a shared treatment decision and we wanted to outline specific scenarios to guide that decision,” panel cochair and first author Saskia Middeldorp, MD, PhD, explained in a press release announcing the publication of the guidelines in Blood Advances.

Dr. Middeldorp is a professor of medicine and head of the department of internal medicine at Radboud University Medical Center, Nijmegen, the Netherlands.

The guidelines are the latest in an ASH series on VTE. To develop them, ASH convened a multidisciplinary panel with clinical and methodological expertise, and the draft recommendations were subject to public comment. The guidelines “provide recommendations informed by case-based approaches and modeling to ensure the medical community can better diagnose and treat thrombophilia and people with the condition can make the best decisions for their care,” the press release explains.

Thrombophilia affects an estimated 10% of the population. Testing for the clotting disorder can be costly, and the use of testing to help guide treatment decisions is controversial.

“For decades there has been dispute about thrombophilia testing,” Dr. Middeldorp said. “We created a model about whether and when it would be useful to test for thrombophilia, and based on the model, we suggest it can be appropriate in [the specified] situations.”

The panel agreed on 23 recommendations regarding thrombophilia testing and management. Most are based on “very low certainty” in the evidence because of modeling assumptions.

However, the panel agreed on a strong recommendation against testing the general population before starting combined oral contraceptives (COC), and a conditional recommendation for thrombophilia testing in:

  • Patients with VTE associated with nonsurgical major transient or hormonal risk factors
  • Patients with cerebral or splanchnic venous thrombosis in settings where anticoagulation would otherwise be discontinued
  • Individuals with a family history of antithrombin, protein C, or protein S deficiency when considering thromboprophylaxis for minor provoking risk factors and for guidance related to the use of COC or hormone therapy
  • Pregnant women with a family history of high-risk thrombophilia types
  • Patients with cancer at low or intermediate risk of thrombosis and with a family history of VTE

“In all other instances, we suggest not testing for thrombophilia,” said Dr. Middeldorp.

The ASH guidelines largely mirror existing guidelines from a number of other organizations, but the recommendation in favor of testing for thrombophilia in patients with VTE provoked by a nonsurgical major transient risk factor, or associated with COCs, hormone therapy, pregnancy, or the postpartum period, is new and “may cause considerable discussion, as many currently view these VTE episodes as provoked and are generally inclined to use short-term anticoagulation for such patients,” the guideline authors wrote.

“It is important to note, however, that most guidelines or guidance statements on thrombophilia testing did not distinguish between major and minor provoking risk factors, which current science suggests is appropriate,” they added.

Another novel recommendation is the suggestion to test for hereditary thrombophilia to guide the use of thromboprophylaxis during systemic treatment in ambulatory patients with cancer who are at low or intermediate risk for VTE and who have a family history of VTE.

“This new recommendation should be seen as a new application of an established risk stratification approach,” they said.

Additional research is urgently needed, particularly “large implementation studies comparing the impact, in terms of outcomes rates, among management strategies involving or not involving thrombophilia testing,” they noted.

The guideline was wholly funded by ASH. Dr. Middeldorp reported having no conflicts of interest.

FROM BLOOD ADVANCES

CKD screening in all U.S. adults found cost effective


Screening for and treating chronic kidney disease (CKD) in all U.S. adults 35-75 years old is cost effective using a strategy that starts by measuring their urine albumin-creatinine ratio (UACR) followed by confirmatory tests and treatment of confirmed cases with current standard-care medications, according to an analysis published in the Annals of Internal Medicine.

This new evidence may prove important as the U.S. Preventive Services Task Force (USPSTF) has begun revisiting its 2012 conclusion that “evidence is insufficient to assess the balance of benefits and harms of routine screening for chronic kidney disease in asymptomatic adults.”

Photo: Ms. Marika M. Cusick

A big difference between 2012 and today is that sodium-glucose cotransporter 2 (SGLT2) inhibitors have arrived on the scene as an important complement to well-established treatment with an angiotensin-converting enzyme inhibitor or an angiotensin receptor blocker. SGLT2 inhibitors have been documented as safe and effective for slowing CKD progression regardless of a person’s diabetes status, and have “dramatically altered” first-line treatment of adults with CKD, wrote the authors of the new study.
 

‘Large population health gains’ from CKD screening

“Given the high prevalence of CKD, even among those without risk factors, low-cost screening combined with effective treatment using SGLT2 inhibitors represent value,” explained Marika M. Cusick, lead author of the report, a PhD student, and a health policy researcher at Stanford (Calif.) University. “Our results show large population health gains can be achieved through CKD screening,” she said in an interview.

“This is a well-designed cost-effectiveness analysis that, importantly, considers newer treatments shown to be effective for slowing progression of CKD. The overall findings are convincing,” commented Deidra C. Crews, MD, a nephrologist and professor at Johns Hopkins University in Baltimore who was not involved in the research.

Dr. Crews, who is also president-elect of the American Society of Nephrology (ASN), noted that the findings “may be a conservative estimate of the cost-effectiveness of CKD screening in certain subgroups, particularly when considering profound racial, ethnic and socioeconomic disparities in survival and CKD progression.”

The USPSTF starts a relook

The new evidence of cost-effectiveness of routine CKD screening follows the USPSTF’s release in January 2023 of a draft research plan to reassess the potential role for CKD screening of asymptomatic adults in the United States, the first step on a potential path to a revised set of recommendations. Public comment on the draft plan closed in February, and based on the standard USPSTF development steps and time frames, a final recommendation statement could appear by early 2026.

Revisiting the prior USPSTF decision from 2012 received endorsement earlier in 2023 from the ASN. The organization issued a statement last January that cited “more than a decade of advocacy in support of more kidney health screening by ASN and other stakeholders dedicated to intervening earlier to slow or stop the progression of kidney diseases.”

A more detailed letter of support for CKD screening sent to top USPSTF officials followed in February 2023 from ASN president Michelle A. Josephson, MD, who said in part that “ASN believes that kidney care is at an inflection point. There are now far more novel therapeutics to slow the progression of CKD, evidence to support the impact of nonpharmacologic interventions on CKD, and an increased commitment in public health to confront disparities and their causes.”

USPSTF recommendation could make a difference

Dr. Josephson also cited the modest effect that CKD screening recommendations from other groups have had up to now.

“Although guidance from Kidney Disease Improving Global Outcomes and the National Kidney Foundation recommends CKD screening among patients with hypertension, only approximately 10% of individuals with hypertension receive yearly screening. Furthermore, American Diabetes Association guidelines recommend yearly CKD screening in patients with diabetes, but only 40%-50% of patients receive this.”

Photo: Dr. Deidra C. Crews

“USPSTF recommendations tend to reach clinicians in primary care settings, where screening for diseases most commonly occurs, much more than recommendations from professional or patient organizations,” Dr. Crews said in an interview. “USPSTF recommendations also often influence health policies that might financially incentivize clinicians and health systems to screen their patients.”

“We hope [the USPSTF] will be interested in including our results within the totality of evidence assessed in their review of CKD screening,” said Ms. Cusick.
 

Preventing hundreds of thousands of dialysis cases

The Stanford researchers developed a decision analytic Markov cohort model of CKD progression in U.S. adults aged 35 years or older and fit their model to data from the National Health and Nutrition Examination Survey (NHANES). They found that implementing one-time screening and adding SGLT2 inhibitors to treatment of the 158 million U.S. adults 35-75 years old would prevent the need for kidney replacement therapy (dialysis or transplant) in approximately 398,000 people over their lifetimes, representing a 10% decrease in such cases, compared with the status quo. Screening every 10 or 5 years combined with SGLT2 inhibitors would prevent approximately 598,000 or 658,000 people, respectively, from requiring kidney replacement therapy, compared with not screening.
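To make the methodology concrete, a Markov cohort model advances the state distribution of a closed cohort one cycle at a time through a transition probability matrix. The sketch below is a deliberately tiny, hypothetical illustration of the technique in general; the states, probabilities, and 40-year horizon are invented for demonstration and bear no relation to the study's fitted NHANES parameters.

```python
import numpy as np

# Toy Markov cohort model: states and annual transition probabilities are
# hypothetical illustrations, not the study's fitted values.
states = ["No CKD", "CKD", "Kidney replacement therapy", "Dead"]
P = np.array([
    [0.97, 0.02, 0.00, 0.01],  # No CKD
    [0.00, 0.94, 0.03, 0.03],  # CKD
    [0.00, 0.00, 0.90, 0.10],  # Kidney replacement therapy
    [0.00, 0.00, 0.00, 1.00],  # Dead (absorbing state)
])  # each row sums to 1

cohort = np.array([1.0, 0.0, 0.0, 0.0])  # everyone starts CKD-free
for _ in range(40):                      # simulate 40 annual cycles
    cohort = cohort @ P

for state, share in zip(states, cohort):
    print(f"{state}: {share:.3f}")
```

In a cost-effectiveness analysis of this general kind, competing strategies such as screening schedules are compared by modifying the transition probabilities and attaching costs and quality-of-life weights to each state-year.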

Analysis showed that one-time screening of adults at age 55 produced an incremental cost-effectiveness ratio of $86,300 per quality-adjusted life-year (QALY) gained. Screening every 10 years until age 75 cost $98,400 per QALY gained when screening began at age 35, and $89,800 per QALY gained when it began at age 65. These costs are below “commonly used” U.S. thresholds for acceptable cost-effectiveness of $100,000-$150,000 per QALY gained, the authors said.
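The incremental cost-effectiveness ratio behind these figures is simply the extra cost of a strategy divided by the extra health it buys relative to the comparator (standard health-economics notation, ours rather than the paper’s):

\text{ICER} = \frac{C_{\text{strategy}} - C_{\text{comparator}}}{\text{QALY}_{\text{strategy}} - \text{QALY}_{\text{comparator}}},

so the reported $86,300-per-QALY figure means one-time screening at age 55 cost $86,300 more for each additional quality-adjusted life-year gained over the status quo.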

Ms. Cusick highlighted the advantages of population-level screening for all U.S. adults, including those who are asymptomatic, compared with focusing on adults with risk factors, such as hypertension or diabetes.

“While risk-based screening can be more cost effective in some settings, risk factors are not always known, especially in marginalized and disadvantaged populations. This may lead to disparities in the use of screening and downstream health outcomes that could be avoided through universal screening policies,” she explained.

The study received no commercial funding. Ms. Cusick had no disclosures. Dr. Crews has received research grants from Somatus. Dr. Josephson has been a consultant to Exosome Diagnostics, IMMUCOR, Labcorp, Otsuka, UBC, and Vera Therapeutics, and has an ownership interest in Seagen.

Publications
Topics
Sections

Screening for and treating chronic kidney disease (CKD) in all U.S. adults 35-75 years old is cost effective using a strategy that starts by measuring their urine albumin-creatinine ratio (UACR) followed by confirmatory tests and treatment of confirmed cases with current standard-care medications, according to an analysis published in the Annals of Internal Medicine.

This new evidence may prove important as the U.S. Preventive Services Task Force has begun revisiting its 2012 conclusion that “evidence is insufficient to assess the balance of benefits and harms of routine screening for chronic kidney disease in asymptomatic adults.”

Ms. Marika M. Cusick

A big difference between 2012 and today has been that sodium-glucose cotransporter 2 (SGLT2) inhibitors arrived on the scene as an important complement to well-established treatment with an angiotensin-converting enzyme inhibitor or angiotensin receptor blocker. SGLT2 inhibitors have been documented as safe and effective for slowing CKD progression regardless of a person’s diabetes status, and have “dramatically altered” first-line treatment of adults with CKD, wrote the authors of the new study.
 

‘Large population health gains’ from CKD screening

“Given the high prevalence of CKD, even among those without risk factors, low-cost screening combined with effective treatment using SGLT2 inhibitors represent value,” explained Marika M. Cusick, lead author of the report, a PhD student, and a health policy researcher at Stanford (Calif.) University. “Our results show large population health gains can be achieved through CKD screening,” she said in an interview.

“This is a well-designed cost-effectiveness analysis that, importantly, considers newer treatments shown to be effective for slowing progression of CKD. The overall findings are convincing,” commented Deidra C. Crews, MD, a nephrologist and professor at Johns Hopkins University in Baltimore who was not involved in the research.

Dr. Crews, who is also president-elect of the American Society of Nephrology noted that the findings “may be a conservative estimate of the cost-effectiveness of CKD screening in certain subgroups, particularly when considering profound racial, ethnic and socioeconomic disparities in survival and CKD progression.”

The USPSTF starts a relook

The new evidence of cost-effectiveness of routine CKD screening follows the USPSTF’s release in January 2023 of a draft research plan to reassess the potential role for CKD screening of asymptomatic adults in the United States, the first step on a potential path to a revised set of recommendations. Public comment on the draft plan closed in February, and based on the standard USPSTF development steps and time frames, a final recommendation statement could appear by early 2026.

Revisiting the prior USPSTF decision from 2012 received endorsement earlier in 2023 from the ASN. The organization issued a statement last January that cited “more than a decade of advocacy in support of more kidney health screening by ASN and other stakeholders dedicated to intervening earlier to slow or stop the progression of kidney diseases.”

A more detailed letter of support for CKD screening sent to top USPSTF officials followed in February 2023 from ASN president Michelle A. Josephson, MD, who said in part that “ASN believes that kidney care is at an inflection point. There are now far more novel therapeutics to slow the progression of CKD, evidence to support the impact of nonpharmacologic interventions on CKD, and an increased commitment in public health to confront disparities and their causes.”
 

 

 

USPSTF recommendation could make a difference

Dr. Josephson also cited the modest effect that CKD screening recommendations from other groups have had up to now.

“Although guidance from Kidney Disease Improving Global Outcomes and the National Kidney Foundation recommends CKD screening among patients with hypertension, only approximately 10% of individuals with hypertension receive yearly screening. Furthermore, American Diabetes Association guidelines recommend yearly CKD screening in patients with diabetes, but only 40%-50% of patients receive this.”

Dr. Deidra C. Crews

“USPSTF recommendations tend to reach clinicians in primary care settings, where screening for diseases most commonly occurs, much more than recommendations from professional or patient organizations,” Dr. Crews said in an interview. “USPSTF recommendations also often influence health policies that might financially incentivize clinicians and health systems to screen their patients.”

“We hope [the USPSTF] will be interested in including our results within the totality of evidence assessed in their review of CKD screening,” said Ms. Cusick.
 

Preventing hundreds of thousands dialysis cases

The Stanford researchers developed a decision analytic Markov cohort model of CKD progression in U.S. adults aged 35 years or older and fit their model to data from the National Health and Nutrition Examination Survey (NHANES). They found that implementing one-time screening and adding SGLT2 inhibitors to treatment of the 158 million U.S. adults 35-75 years old would prevent the need for kidney replacement therapy (dialysis or transplant) in approximately 398,000 people over their lifetimes, representing a 10% decrease in such cases, compared with the status quo. Screening every 10 or 5 years combined with SGLT2 inhibitors would prevent approximately 598,000 or 658,000 people, respectively, from requiring kidney replacement therapy, compared with not screening.

Analysis showed that one-time screening produced an incremental cost-effectiveness ratio of $86,300 per quality-adjusted life-year (QALY) gained when one-time screening occurred in adults when they reached 55 years old. Screening every 10 years until people became 75 years old cost $98,400 per QALY gained for this group when adults were 35 years old, and $89,800 per QALY gained when screening occurred at 65 years old. These QALY costs are less than “commonly used” U.S. thresholds for acceptable cost-effectiveness of $100,000-$150,000 per QALY gained, the authors said.

Ms. Cusick highlighted the advantages of population-level screening for all U.S. adults, including those who are asymptomatic, compared with focusing on adults with risk factors, such as hypertension or diabetes.

“While risk-based screening can be more cost effective in some settings, risk factors are not always known, especially in marginalized and disadvantaged populations. This may lead to disparities in the use of screening and downstream health outcomes that could be avoided through universal screening policies,” she explained.

The study received no commercial funding. Ms. Cusick had no disclosures. Dr. Crews has received research grants from Somatus. Dr. Josephson has been a consultant to Exosome Diagnostics, IMMUCOR, Labcorp, Otsuka, UBC, and Vera Therapeutics, and has an ownership interest in Seagen.

Screening for and treating chronic kidney disease (CKD) in all U.S. adults 35-75 years old is cost effective using a strategy that starts by measuring their urine albumin-creatinine ratio (UACR) followed by confirmatory tests and treatment of confirmed cases with current standard-care medications, according to an analysis published in the Annals of Internal Medicine.

This new evidence may prove important as the U.S. Preventive Services Task Force has begun revisiting its 2012 conclusion that “evidence is insufficient to assess the balance of benefits and harms of routine screening for chronic kidney disease in asymptomatic adults.”

Ms. Marika M. Cusick

A big difference between 2012 and today has been that sodium-glucose cotransporter 2 (SGLT2) inhibitors arrived on the scene as an important complement to well-established treatment with an angiotensin-converting enzyme inhibitor or angiotensin receptor blocker. SGLT2 inhibitors have been documented as safe and effective for slowing CKD progression regardless of a person’s diabetes status, and have “dramatically altered” first-line treatment of adults with CKD, wrote the authors of the new study.
 

‘Large population health gains’ from CKD screening

“Given the high prevalence of CKD, even among those without risk factors, low-cost screening combined with effective treatment using SGLT2 inhibitors represent value,” explained Marika M. Cusick, lead author of the report, a PhD student, and a health policy researcher at Stanford (Calif.) University. “Our results show large population health gains can be achieved through CKD screening,” she said in an interview.

“This is a well-designed cost-effectiveness analysis that, importantly, considers newer treatments shown to be effective for slowing progression of CKD. The overall findings are convincing,” commented Deidra C. Crews, MD, a nephrologist and professor at Johns Hopkins University in Baltimore who was not involved in the research.

Dr. Crews, who is also president-elect of the American Society of Nephrology noted that the findings “may be a conservative estimate of the cost-effectiveness of CKD screening in certain subgroups, particularly when considering profound racial, ethnic and socioeconomic disparities in survival and CKD progression.”

The USPSTF starts a relook

The new evidence of cost-effectiveness of routine CKD screening follows the USPSTF’s release in January 2023 of a draft research plan to reassess the potential role for CKD screening of asymptomatic adults in the United States, the first step on a potential path to a revised set of recommendations. Public comment on the draft plan closed in February, and based on the standard USPSTF development steps and time frames, a final recommendation statement could appear by early 2026.

Revisiting the prior USPSTF decision from 2012 received endorsement earlier in 2023 from the ASN. The organization issued a statement in January that cited “more than a decade of advocacy in support of more kidney health screening by ASN and other stakeholders dedicated to intervening earlier to slow or stop the progression of kidney diseases.”

A more detailed letter of support for CKD screening sent to top USPSTF officials followed in February 2023 from ASN president Michelle A. Josephson, MD, who said in part that “ASN believes that kidney care is at an inflection point. There are now far more novel therapeutics to slow the progression of CKD, evidence to support the impact of nonpharmacologic interventions on CKD, and an increased commitment in public health to confront disparities and their causes.”

USPSTF recommendation could make a difference

Dr. Josephson also cited the modest effect that CKD screening recommendations from other groups have had up to now.

“Although guidance from Kidney Disease Improving Global Outcomes and the National Kidney Foundation recommends CKD screening among patients with hypertension, only approximately 10% of individuals with hypertension receive yearly screening. Furthermore, American Diabetes Association guidelines recommend yearly CKD screening in patients with diabetes, but only 40%-50% of patients receive this.”

Dr. Deidra C. Crews

“USPSTF recommendations tend to reach clinicians in primary care settings, where screening for diseases most commonly occurs, much more than recommendations from professional or patient organizations,” Dr. Crews said in an interview. “USPSTF recommendations also often influence health policies that might financially incentivize clinicians and health systems to screen their patients.”

“We hope [the USPSTF] will be interested in including our results within the totality of evidence assessed in their review of CKD screening,” said Ms. Cusick.
 

Preventing hundreds of thousands of dialysis cases

The Stanford researchers developed a decision analytic Markov cohort model of CKD progression in U.S. adults aged 35 years or older and fit their model to data from the National Health and Nutrition Examination Survey (NHANES). They found that implementing one-time screening and adding SGLT2 inhibitors to treatment of the 158 million U.S. adults 35-75 years old would prevent the need for kidney replacement therapy (dialysis or transplant) in approximately 398,000 people over their lifetimes, representing a 10% decrease in such cases, compared with the status quo. Screening every 10 or 5 years combined with SGLT2 inhibitors would prevent approximately 598,000 or 658,000 people, respectively, from requiring kidney replacement therapy, compared with not screening.
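
To make the modeling approach concrete, here is a minimal sketch of a decision-analytic Markov cohort model in Python. The four health states and every transition probability are hypothetical placeholders for illustration only; they are not the published model’s NHANES-fitted inputs.

```python
import numpy as np

# States: 0 = no CKD, 1 = CKD, 2 = kidney replacement therapy (KRT), 3 = dead.
# Every probability below is a hypothetical placeholder, not an input from the
# published NHANES-fitted model.
base = np.array([
    [0.97, 0.02, 0.00, 0.01],  # no CKD: stay, develop CKD, -, die
    [0.00, 0.93, 0.04, 0.03],  # CKD: stay, progress to KRT, die
    [0.00, 0.00, 0.85, 0.15],  # KRT: stay, die
    [0.00, 0.00, 0.00, 1.00],  # dead (absorbing)
])

# Screening plus SGLT2 inhibitor treatment modeled as slowing CKD -> KRT
# progression (a hypothetical relative risk reduction).
treated = base.copy()
treated[1] = [0.00, 0.955, 0.015, 0.03]

def lifetime_krt_risk(trans: np.ndarray, cycles: int = 40) -> float:
    """Fraction of the starting cohort that ever begins KRT over annual cycles."""
    dist = np.array([1.0, 0.0, 0.0, 0.0])  # everyone starts CKD-free
    new_krt = 0.0
    for _ in range(cycles):
        new_krt += dist @ trans[:, 2] - dist[2] * trans[2, 2]  # new KRT entries
        dist = dist @ trans  # advance the cohort one annual cycle
    return new_krt

r0, r1 = lifetime_krt_risk(base), lifetime_krt_risk(treated)
print(f"lifetime KRT risk: {r0:.2%} status quo vs. {r1:.2%} screen-and-treat")
print(f"absolute risk reduction (illustrative only): {r0 - r1:.2%}")
```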

Analysis showed that one-time screening at age 55 produced an incremental cost-effectiveness ratio (ICER) of $86,300 per quality-adjusted life-year (QALY) gained. Screening every 10 years until age 75 cost $98,400 per QALY gained when started at age 35, and $89,800 per QALY gained when started at age 65. These costs are less than “commonly used” U.S. thresholds for acceptable cost-effectiveness of $100,000-$150,000 per QALY gained, the authors said.
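
For readers unfamiliar with the metric, an ICER is simply the difference in lifetime cost between two strategies divided by the difference in QALYs. The sketch below illustrates the arithmetic; the per-person costs and QALYs are hypothetical round numbers chosen so the ratio reproduces the reported $86,300 per QALY, not values taken from the study.

```python
def icer(cost_new: float, cost_old: float, qaly_new: float, qaly_old: float) -> float:
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical per-person lifetime values (not the study's inputs), chosen so
# the ratio matches the reported $86,300 per QALY for screening at age 55.
ratio = icer(cost_new=64_863, cost_old=56_233, qaly_new=12.34, qaly_old=12.24)
print(f"ICER: ${ratio:,.0f} per QALY gained")

for threshold in (100_000, 150_000):  # commonly used U.S. willingness-to-pay
    verdict = "acceptable" if ratio <= threshold else "not acceptable"
    print(f"at ${threshold:,} per QALY: {verdict}")
```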

Ms. Cusick highlighted the advantages of population-level screening for all U.S. adults, including those who are asymptomatic, compared with focusing on adults with risk factors, such as hypertension or diabetes.

“While risk-based screening can be more cost effective in some settings, risk factors are not always known, especially in marginalized and disadvantaged populations. This may lead to disparities in the use of screening and downstream health outcomes that could be avoided through universal screening policies,” she explained.

The study received no commercial funding. Ms. Cusick had no disclosures. Dr. Crews has received research grants from Somatus. Dr. Josephson has been a consultant to Exosome Diagnostics, IMMUCOR, Labcorp, Otsuka, UBC, and Vera Therapeutics, and has an ownership interest in Seagen.


FROM ANNALS OF INTERNAL MEDICINE


Noninferior to DES, novel bioadaptable stent may improve target vessel physiology

Article Type
Changed
Wed, 05/24/2023 - 15:32

Stent is not a “me-too” device

Moving in a very different direction from past coronary stent designs, a new device characterized as bioadaptable, as opposed to bioabsorbable, was noninferior to a widely used drug-eluting stent and was associated with several unique improvements in target vessel function at 12 months in a randomized controlled trial.

“The device restored vessel motion, which we think is the reason that we saw plaque stabilization and regression,” reported Shigeru Saito, MD, director of the catheterization laboratory at Shonan Kamakura (Japan) General Hospital.

The principal features of the bioadaptable design are cobalt-chromium metal helical strands that provide indefinite scaffolding support, coupled with a biodegradable sirolimus-containing poly(D,L-lactic-co-glycolic acid) (PLGA) topcoat and a biodegradable poly-L-lactic acid (PLLA) bottom coat that “uncage” the vessel once these materials are resorbed, said Dr. Saito.

Twelve-month data from the randomized BIOADAPTOR trial, presented as a late breaker at the annual meeting of the European Association of Percutaneous Cardiovascular Interventions, provide the first evidence that this uncaging of the vessel is an advantage.

Compared head-to-head with a contemporary drug-eluting stent (DES) in a randomized trial, the bioadaptable stent, as predicted by prior studies, “improved hemodynamics and supported plaque stabilization and positive remodeling,” said Dr. Saito.

In BIOADAPTOR, 445 patients in Japan, Germany, Belgium, and New Zealand were randomized to the novel stent, called DynamX, or to the Resolute Onyx. The trial has a planned follow-up of 5 years.

While the primary endpoint at 12 months was noninferiority for target lesion failure (TLF), it was a series of secondary imaging endpoints that suggested an important impact of uncaging the vessel, including better vessel function potentially relevant to resistance to restenosis.

As a result of numerically lower TLF in the DynamX group (1.8% vs. 2.8%), the new device easily demonstrated noninferiority at a high level of significance (P < .001). A numerical advantage for most events, including cardiovascular death (0% vs. 0.9%) and target-vessel myocardial infarction (1.4% vs. 1.9%), favored the novel device, but event rates were low in both arms and none of these differences were statistically significant.
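
To show how a noninferiority P value of this kind arises, here is a sketch of a simple one-sided Wald z-test on the risk difference. The per-arm sizes and the 5% absolute noninferiority margin are assumptions for illustration; neither the trial’s actual margin nor its exact arm sizes are reported in this article.

```python
from math import sqrt
from statistics import NormalDist

# Reported 12-month TLF rates. Per-arm sizes and the noninferiority margin
# below are illustrative assumptions, not trial specifications.
p_new, n_new = 0.018, 222   # DynamX
p_ctl, n_ctl = 0.028, 223   # Resolute Onyx
margin = 0.05               # assumed absolute noninferiority margin

diff = p_new - p_ctl        # risk difference; negative favors DynamX
se = sqrt(p_new * (1 - p_new) / n_new + p_ctl * (1 - p_ctl) / n_ctl)

# One-sided Wald test of H0: diff >= margin (inferior) vs. H1: diff < margin.
z = (margin - diff) / se
p_value = 1 - NormalDist().cdf(z)
print(f"risk difference = {diff:+.3f}, z = {z:.2f}, one-sided P = {p_value:.1e}")
```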

However, the secondary imaging analyses at 12 months suggested major differences between the two devices from “uncaging” the vessel.

These differences included a highly significant improvement at 12 months in vessel pulsatility (P < .001) within the DynamX stent relative to the Onyx stent in all measured segments (proximal, mid, and distal).

In addition, compliance within the stented segment remained suppressed relative to both the proximal (P < .001) and distal (P < .001) segments in patients fitted with the Onyx device. Conversely, there was no significant difference in this measure among those fitted with the DynamX device.

At 12 months, plaque volume behind the stent in noncalcified lesions increased 9% in the Onyx group but decreased 4% in the DynamX group (P = .028).

While percent diameter stenosis within the stent increased by about 13% overall among patients receiving the DynamX device, the increase was consistently lower than that observed in the Onyx group. This difference was only a trend overall (12.7% vs. 17.3%; P = .051), but the advantage reached significance, favoring DynamX, for the left anterior descending (LAD) artery (12.1% vs. 19.0%; P = .006), small vessels (13.0% vs. 18.3%; P = .045), and long lesions (13.0% vs. 22.9%; P = .008).

The same relative advantage for DynamX was seen for late lumen loss at 6 months. In this case, the overall advantage of DynamX (0.09 vs. 0.25 mm; P = .038) did reach significance, and there was an advantage for the LAD (–0.02 vs. 0.24 mm; P = .007) and long lesions (–0.06 vs. 0.38 mm; P = .016). The difference did not reach significance for small vessels (0.08 vs. 0.26 mm; P = .121).

All of these advantages on the secondary endpoints can be directly attributed to the effect of uncaging the vessel, according to Dr. Saito, who said this new design “addresses the shortcomings” of both previous drug-eluting and biodegradable stents.

Pointing out that the nonplateauing of late events has persisted regardless of stent design after “more than 20 years of innovation in design and materials,” Dr. Saito said all current stents have weaknesses. While biodegradable stents have not improved long-term outcomes relative to DES “as a result of loss of long-term vessel dynamic support,” DES are flawed due to “permanent caging of the vessel and loss of vessel motion and function.”

This novel hybrid design, employing both metal and biodegradable components, “is a completely different concept,” said Ron Waksman, MD, associate director, division of cardiology, MedStar Washington Hospital Center. He was particularly impressed by the improvements in pulsatility and compliance in target vessels along with the favorable effects on plaque volume.

“The reduction in plaque volume is something we have not seen before. Usually we see the opposite,” Dr. Waksman said.

Dr. Ron Waksman

“Clearly, the Bioadaptor device is not a me-too stent,” he said. He was not surprised that there was no difference in hard outcomes given both the small sample size and the fact that the advantages of uncaging the vessel are likely to accrue over time.

“We need to look at what happens after 1 year. We still have not seen the potential of this device,” he said, adding he was “impressed” by the features of this novel concept. However, he suggested the advantages remain theoretical from the clinical standpoint, advising Dr. Saito that “you still need to demonstrate the clinical benefits.”

Dr. Saito reports a financial relationship with Elixir Medical, which funded the BIOADAPTOR trial. Dr. Waksman reports financial relationships with 19 pharmaceutical companies including those that manufacture cardiovascular stents.



FROM EUROPCR 2023


Researchers make headway in understanding axSpA diagnostic delay

Article Type
Changed
Mon, 05/22/2023 - 20:47

 

– With early diagnosis an ongoing complex target for axial spondyloarthritis (axSpA), new research may help to answer where the biggest delays lie.

Gregory McDermott, MD, a research fellow at Brigham and Women’s Hospital in Boston, led a pilot study with data from Mass General Brigham electronic health records. He shared top results at the annual meeting of the Spondyloarthritis Research and Treatment Network (SPARTAN), where addressing delay in diagnosis was a major theme.

Included in the cohort were 554 patients who had three ICD-9 or ICD-10 codes and an imaging report of sacroiliitis, ankylosis, or syndesmophytes, and were screened via manual chart review for modified New York and Assessment of Spondyloarthritis International Society criteria.

The average diagnostic delay for axSpA was 6.8 years in this study (median, 3.8 years), relatively consistent with findings in previous studies globally, and the average age of onset was 29.5 years.

The researchers also factored in history of specialty care for back pain (orthopedics, physical medicine and rehabilitation, pain medicine) or extra-articular manifestations (ophthalmology, dermatology, gastroenterology) before axSpA diagnosis. Other factors included smoking and insurance status, along with age, sex, race, and other demographic data.

The results showed shorter delays in diagnosing axSpA were associated with older age at symptom onset and peripheral arthritis, whereas longer delays (more than 4 years) were associated with a history of uveitis, ankylosing spondylitis at diagnosis, and being among those in the 80-99th percentile on the social vulnerability index (SVI). The SVI includes U.S. census data on factors including housing type, household composition and disability status, employment status, minority status, non-English speaking, educational attainment, transportation, and mean income level.
 

Notable uveitis finding

Dr. McDermott said the team was surprised by the association between having had uveitis and delayed axSpA diagnosis.

Among patients with uveitis, 12% had a short delay of 0-1 years from symptom onset to axSpA diagnosis, but more than twice that percentage (27%) had a delay of more than 4 years (P < .001).

“We thought the finding related to uveitis was interesting and potentially clinically meaningful as 27% of axSpA patients in our cohort with more than 4 years of diagnostic delay sought ophthalmology care prior to their diagnosis, [compared with 13% of patients with a diagnosis within 1 year],” Dr. McDermott said. “This practice setting in particular may be a place where we can intervene with simple screening or increased education in order to get people appropriately referred to rheumatology care.”

Longer delays can lead to more functional impairment, radiographic progression, and work disability, as well as poorer quality of life, increased depression, and higher unemployment and health care costs, Dr. McDermott said.
 

Patients may miss key treatment window

Maureen Dubreuil, MD, MSc, assistant professor at Boston University and a rheumatologist with the VA Boston Healthcare System, who was not part of the study, said: “This study addressed a critically important problem in the field – that diagnosis of axSpA is delayed by 7 years, which is much longer than the average time to diagnosis for other forms of arthritis, such as rheumatoid arthritis, which is under 6 months.

“It is critical that diagnostic delay is reduced in axSpA because undiagnosed individuals may miss an important window of opportunity to receive treatment that prevents permanent structural damage and functional declines. This work, if confirmed in other data, would allow development of interventions to improve timely evaluation of individuals with chronic back pain who may have axSpA, particularly among those within lower socioeconomic strata, and those who are older or have uveitis.”
 

Study tests screening tool

Among the ideas proposed for reducing the delay was a referral strategy with a screening tool.

Swetha Alexander, MD, a rheumatology fellow at the University of Utah, Salt Lake City, who presented her team’s poster, noted that, in the United States, patients with chronic back pain often present first to a primary care doctor or another specialist, not to a rheumatologist.

As an internal medicine resident at Yale University, New Haven, Conn., Dr. Alexander and colleagues there conducted the Finding Axial Spondyloarthritis (FaxSpA) study to test whether patient self-referral or referral by other physicians, guided by answers to a screening tool, could help to speed the process of getting patients more likely to have axSpA to a rheumatologist.

Dr. Alexander said they found that using the screening tool was better than having no referral strategy, explaining that screening helped diagnose about 34% of the study population with axSpA, whereas if a patient came in with chronic back pain to a primary care physician without any screening and ultimately to a rheumatologist, “you’re only capturing about 20%,” she said, citing estimates in the literature.
 

Questions may need rewording

However, the researchers found that patients interpreted the screening questions differently depending on whether they answered online or responded to a rheumatologist’s questions in person. For more success, Dr. Alexander said, the questions may need to be reworded, or more education may be needed for both patients and physicians to get more valid information.

For instance, she said, when the screening tool asks about inflammation, the patient may assume the physician is asking about pain and answer one way, but when a rheumatologist asks the question a slightly different way in the clinic, the patient may give a different answer.
 

First questions in portal, on social media

In the screening intervention (called the A-tool), patients first answered three questions via the MyChart portal or Facebook. If they answered all three questions positively, they moved on to another round of questions, and the answers determined whether they were eligible to come into the rheumatologist to be evaluated for axSpA.

At the study visit, rheumatologists asked the same questions as the online A-tool, which focus on SpA features with reasonable sensitivity and specificity for axSpA (no labs or imaging included). Clinicians’ judgment was considered the gold standard for diagnosis of axSpA.

The authors reported that 1,274 patients answered questions with the screening tool via Facebook (50%) and MyChart (50%) from April 2019 to February 2022. Among the responders, 507 (40%) were eligible for a rheumatologist visit.

As of May 2022, 100 patients were enrolled. Of the enrolled patients, 86 patients completed all the study procedures, including study visit, labs, and imaging (x-ray and MRI of the pelvis). Of the 86 patients, 29 (34%) were diagnosed with axSpA.
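
The yields quoted above follow directly from the reported funnel counts; the short sketch below makes the arithmetic, and the comparison with the roughly 20% unscreened yield Dr. Alexander cited from the literature, explicit.

```python
# Counts from the FaxSpA report. The ~20% baseline is the literature estimate
# Dr. Alexander cited for unscreened referral pathways, not a study result.
responders = 1274   # answered the online A-tool
eligible = 507      # screened positive and eligible for a rheumatology visit
completed = 86      # finished the study visit, labs, and imaging
diagnosed = 29      # clinician-confirmed axSpA

print(f"screen-positive rate: {eligible / responders:.0%}")              # ~40%
print(f"diagnostic yield among completers: {diagnosed / completed:.0%}") # ~34%
print(f"enrichment vs. ~20% unscreened yield: {diagnosed / completed / 0.20:.1f}x")
```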

The tool appears to help narrow down which patients with chronic back pain need to be seen by a rheumatologist for potential axSpA, Dr. Alexander said, which may help to speed diagnosis and also save time and resources.

Dr. McDermott, Dr. Dubreuil, and Dr. Alexander reported no relevant financial relationships. The FaxSpA study was supported with funding from Novartis and the Spondylitis Association of America.



AT SPARTAN 2023


Distal radial access doesn’t harm hand function at 1 year

Article Type
Changed
Tue, 05/23/2023 - 08:54

Outcomes equal to proximal approach

In what may be the first randomized trial to compare coronary intervention access using the distal or proximal radial arteries, researchers have found no significant differences between the two in hand function a year after the procedure.

The distal radial artery (DRA) access point is just below the thumb on the inside of the wrist. The proximal radial artery (PRA) entry is in the inside lower forearm above the wrist.

“There has been growing interest in the use of distal radial access given its ease of hemostasis, lower incidence of radial artery occlusions, as well as the more ergonomic favorable setup for a left radial access, which is typically utilized in patients with prior CABG who undergo a cardiac catheterization when used as alternative to femoral artery access,” Karim Al-Azizi, MD, of Texas A&M University, an interventional cardiologist and associate program director of the cardiology fellowship at Baylor Scott & White Health, in Plano, Tex., said in an interview.

Baylor Scott & White
Dr. Karim Al-Azizi

Dr. Al-Azizi presented the late-breaking 1-year results of the DIPRA (Distal vs. Proximal Radial Artery) study at the Society for Cardiovascular Angiography & Interventions annual scientific sessions. The 30-day results of the DIPRA trial were presented at this meeting in 2022.

Dr. Al-Azizi said DIPRA is the first randomized, controlled trial comparing hand function outcomes with the two approaches. “I think the biggest question for most investigators and most practitioners is that, is this safe on the hand? Are we doing the right thing by going into the radial artery in the anatomical snuff box in proximity to the radial nerve and would that affect motor function?” he said. “And it does not seem like it from a head-to-head comparison of proximal versus distal access.”

The DIPRA study randomized 300 patients 1:1 to cardiac catheterization through either distal or proximal access. Of those, 216 completed 1-year follow-up: 112 randomized to DRA and 104 to PRA.

The study used three metrics to evaluate hand function: hand-grip strength; pinch test, which measured the strength of a pinch between the thumb and index finger; and QuickDASH, an abbreviated version of the Disabilities of the Arm, Shoulder, and Hand questionnaire, in which participants self-evaluate their hand function. Study protocol mandated that operators use ultrasound guidance for DRA access.
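
Of the three hand-function metrics, QuickDASH is the only one that is computed rather than directly measured. The sketch below implements the standard published QuickDASH scoring (11 items rated 1-5, at least 10 answered, rescaled to 0-100, with higher scores indicating greater disability); the example responses are hypothetical.

```python
from typing import Optional, Sequence

def quickdash(responses: Sequence[Optional[int]]) -> float:
    """Standard QuickDASH score: 11 items rated 1-5, at least 10 answered;
    0 = no disability, 100 = most severe disability."""
    if len(responses) != 11:
        raise ValueError("QuickDASH has 11 items")
    answered = [r for r in responses if r is not None]
    if len(answered) < 10:
        raise ValueError("at least 10 of 11 items must be answered")
    if not all(1 <= r <= 5 for r in answered):
        raise ValueError("items are rated 1 (no difficulty) to 5 (unable)")
    mean = sum(answered) / len(answered)
    return (mean - 1.0) * 25.0

# Hypothetical pre- and post-procedure questionnaires for one patient.
baseline = [2, 1, 2, 3, 1, 2, 2, 1, 3, 2, None]  # one item skipped
one_year = [1, 1, 2, 2, 1, 2, 1, 1, 2, 2, 1]
print(f"change: {quickdash(one_year) - quickdash(baseline):+.1f} points")
```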

The 1-year results of all three measures showed no significant difference in change of hand function from baseline between the two groups. The composite average score change was –0.07 (–0.41, 0.44) for the DRA patients and –0.03 (–0.36, 0.44) for the PRA group (P = .59).

One-year changes in the specific hand function measures for DRA and PRA, respectively, were: hand grip, 0.7 (–3, 4.5) vs. 1.3 (–2, 4.3) kg (P = .57); pinch grip, –0.1 (–1.1, 1) vs. –0.3 (–1, 0.7) kg (P = .66); and no change in the QuickDASH score, 0 (–6.6, 2.3) vs. 0 (–4.6, 2.9) points (P = .58).

Outcomes at intervention were also similar. Bleeding incidence was 0% and 1.4% (P = .25) in the respective groups. Successful radial artery access was achieved in 96.7% and 98% (P = .72).

Baseline characteristics were balanced between the two groups: 75% were male; mean age was 66.6 ± 9.6 years; 32% had diabetes; 77% had hypertension; and 19% had a previous percutaneous coronary intervention.

One key strength of the DIPRA study, Dr. Al-Azizi noted, is that it included some investigators who were at an early stage of the learning curve with the procedure. A limitation is that it didn’t evaluate hand numbness or tingling, but hand sensory testing is “very subjective,” he said. “To avoid confusion, we decided to go with the more repeatable questionnaire rather than a sensation or sensory test,” he added.

The next step for his research team is to conduct a meta-analysis of studies that have evaluated DRA and PRA, Dr. Al-Azizi said.

‘Slow to the party’

U.S. interventional cardiologists have been “slow to the party” in adopting radial artery access for PCI, said David A. Cox, MD, of Sanger Heart and Vascular Institute in Charlotte, N.C., and SCAI communications committee chair. Even now uptake is low, compared with the rest of the world, he said.

“I can tell you what patients care about: Did you have to stick my groin?” he said at a SCAI press conference. “What they just want to know is that there are no issues with hand function.”

Some patients who need fine motor hand function would still opt for femoral access, he said.

“Are we looking at the right metric?” he asked Dr. Al-Azizi. “It took a long time to get American doctors to stick the radial, so why would I want to learn distal radial artery if I’m really pretty good at proximal and if it’s not inferior?”

Dr. Al-Azizi noted that previous studies showed a trend toward a lower incidence of radial artery occlusion (RAO) with DRA access. It also better preserves the radial artery for future use in dialysis access and CABG, he said.

“The metric that would move the needle,” Dr. Cox noted, “is if you had radial artery occlusion rates vs. snuff box occlusion rates, and we don’t have that rate.”

Dr. Al-Azizi has no relevant financial disclosures. Dr. Cox disclosed financial relationships with Medtronic.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

Outcomes equal to proximal approach

Outcomes equal to proximal approach

In what may be the first randomized trial to compare coronary intervention access using the distal or proximal radial arteries, researchers have found no significant differences between the two in hand function a year after the procedure.

The distal radial artery (DRA) access point is just below the thumb on the inside of the wrist. The proximal radial artery (PRA) entry is in the inside lower forearm above the wrist.

“There has been growing interest in the use of distal radial access given its ease of hemostasis, lower incidence of radial artery occlusions, as well as the more ergonomic favorable setup for a left radial access, which is typically utilized in patients with prior CABG who undergo a cardiac catheterization when used as alternative to femoral artery access,” Karim Al-Azizi, MD, of Texas A&M University, an interventional cardiologist and associate program director of the cardiology fellowship at  Baylor Scott & White Health, in Plano, Tex., said in an interview.

Baylor Scott &amp; White
Dr. Karim Al-Azizi

Dr Al-Azizi presented the late-breaking 1-year results of the DIPRA–for Distal vs. Proximal Radial Artery–study at the Society for Cardiovascular Angiography & Interventions annual scientific sessions. The 30-day results of the DIPRA trial were presented in 2022 at this meeting.

Dr. Al-Azizi said DIPRA is the first randomized, controlled trial comparing hand function outcomes with the two approaches. “I think the biggest question for most investigators and most practitioners is that, is this safe on the hand? Are we doing the right thing by going into the radial artery in the anatomical snuff box in proximity to the radial nerve and would that affect motor function?” he said. “And it does not seem like it from a head-to-head comparison of proximal versus distal access.”

The DIPRA study randomized 300 patients 1:1 to cardiac catheterization through either the distal or proximal access. Of those, 216 completed 1-year follow-up, 112 randomized to DRA and 104 to PRA.

The study used three metrics to evaluate hand function: hand-grip strength; pinch test, which measured the strength of a pinch between the thumb and index finger; and QuickDASH, an abbreviated version of the Disabilities of the Arm, Shoulder, and Hand questionnaire, in which participants self-evaluate their hand function. Study protocol mandated that operators use ultrasound guidance for DRA access.

The 1-year results of all three measures showed no significant difference in change of hand function from baseline between the two groups. The composite average score change was –0.07 (–0.41, 0.44) for the DRA patients and –0.03. (–0.36, 0.44) for the PRA group (P = .59).

One-year change for the specific hand function measures for DRA and PRA, respectively, were: hand grip, 0.7 (–3, 4.5) vs. 1.3 (–2, 4.3) kg (P = .57); pinch grip, –0.1 (–1.1, 1) vs. –0.3 (–1, 0.7) kg (P = .66); and none for change in the QuickDASH score (–6.6, 2.3 vs. –4.6, 2.9) points (P = .58).

Outcomes at intervention were also similar. Bleeding incidence was 0% and 1.4% (P = .25) in the respective groups. Successful RA access was achieved in 96.7% and 98% (P = .72).

Baseline characteristics were balanced between the two groups: 75% were male; mean age was 66.6 ± 9.6 years; 32% had diabetes; 77% had hypertension; and 19% had a previous percutaneous coronary intervention.

One key strength of the DIPRA study Dr. Al-Azizi noted is that it included some investigators who were at the early stage of the learning curve with the procedure. A limitation is that it didn’t evaluate hand numbness or tingling, but hand sensory testing is “very subjective,” he said. “To avoid confusion, we decided to go with the more repeatable questionnaire rather than a sensation or sensory test,” he added.

The next step for his research team is to conduct a meta-analysis of studies that have evaluated DRA and PRA, Dr. Al-Azizi said.
 

 

 

‘Slow to the party’

U.S. interventional cardiologists have been “slow to the party” in adopting radial artery access for PCI, said David A. Cox, MD, of Sanger Heart and Vascular Institute in Charlotte, N.C., and SCAI communications committee chair. Even now uptake is low, compared with the rest of the world, he said.

“I can tell you what patients care about: Did you have to stick my groin?” he said at a SCAI press conference. “What they just want to know is that there are no issues with hand function.”

Some patients who need fine motor hand function would still opt for femoral access, he said.

“Are we looking at the right metric?” he asked Dr. Al-Azizi. “It took a long time to get American doctors to stick the radial, so why would I want to learn distal radial artery if I’m really pretty good at proximal and if it’s not inferior?”

Dr. Al-Azizi noted that previous studies showed a trend toward a lower incidence of radial artery occlusion (RAO) with DRA access. It also better preserves the renal arteries for dialysis and CABG, he said.

“The metric that would move the needle,” Dr. Cox noted, “is if you had radial artery occlusion rates vs. snuff box occlusion rates, and we don’t have that rate.”

Dr. Al-Azizi has no relevant financial disclosures. Dr. Cox disclosed financial relationships with Medtronic.


FROM SCAI 2023


Could love hormone help psychological symptoms in AVD?


Patients with arginine vasopressin deficiency (AVD) appear to also be deficient in oxytocin, the “love hormone,” suggesting a possible pathway for treating the psychological symptoms associated with the illness.

Formerly known as central diabetes insipidus, AVD is a rare neuroendocrine condition in which the body’s fluid balance is not properly regulated, leading to polydipsia and polyuria. The vasopressin receptor 2 agonist desmopressin treats those symptoms, but patients often also experience psychopathological problems, such as increased anxiety, depression, and emotional withdrawal.

It has been hypothesized that those symptoms are caused by a concurrent deficiency of the so-called “love hormone” oxytocin, given the anatomic proximity of vasopressin and oxytocin production in the brain.

Now, for the first time, researchers have demonstrated evidence of that phenomenon using 3,4-methylenedioxymethamphetamine (MDMA, also known as “ecstasy”) to provoke oxytocin release. In individuals without AVD, use of MDMA resulted in large increases in plasma oxytocin concentrations, whereas there was very little response among those with AVD, suggesting that the latter patients were deficient in oxytocin.

“These findings are suggestive of a new hypothalamic–pituitary disease entity and contribute to deepening our understanding of oxytocin as a key hormone in centrally generated socioemotional effects, as reflected by reduced prosocial, empathic, and anxiolytic effects in patients with an oxytocin deficiency,” Cihan Atila, MD, of the University of Basel (Switzerland), and colleagues wrote.

“Future studies should evaluate whether oxytocin replacement therapy can alleviate residual symptoms related to oxytocin deficiency in patients with [AVD],” they added.

The findings, from a single-center study of 15 patients with AVD and 15 healthy control persons, were published online in The Lancet Diabetes & Endocrinology.

“Atila and colleagues provide compelling evidence for a clinically relevant oxytocin deficiency in this population of patients, which appears to be at least partly responsible for the associated increase in psychopathological findings,” wrote Mirela Diana Ilie, MD, an endocrinologist in training at the National Institute of Endocrinology, Bucharest, Romania, and Gérald Raverot, MD, professor of endocrinology at Lyon University Hospital, France, in an accompanying editorial.

“From a therapeutic viewpoint, the findings ... pave the way to intervention studies assessing the effect of intranasal oxytocin in patients with [AVD] and better clinical care for these patients,” they added.

However, Dr. Ilie and Dr. Raverot urged caution for a variety of reasons, including the fact that, thus far, only one patient with arginine vasopressin deficiency has been administered oxytocin on a long-term basis. They suggested further studies to answer many pertinent questions, such as what the appropriate doses and frequency of oxytocin administration are, whether the dose should remain constant or be increased during stress or particular acute situations, whether long-term administration is suitable for all patients regardless of the extent of oxytocin deficiency, and how follow-up should be conducted.

“Answering these questions seems all the more important considering that oxytocin therapy has shown conflicting results when administered for psychiatric disorders,” said Dr. Ilie and Dr. Raverot.

In the meantime, “independent of the potential use of oxytocin, given the frequent and important psychological burden of [AVD], clinicians should screen patients for psychological comorbidities and should not hesitate to refer them to appropriate psychological and psychiatric care,” the editorialists wrote.

Eightfold increase in plasma oxytocin levels in control persons vs. patients

The 15 AVD patients and 15 matched healthy control persons were recruited between Feb. 1, 2021, and May 1, 2022. Of those with AVD, eight had an isolated posterior pituitary dysfunction, and seven had a combined pituitary dysfunction. The patients had significantly higher scores on measures of anxiety, alexithymia, and depression, and self-reported mental health was lower, compared with control persons.

All participants were randomly assigned to receive either a single oral dose of MDMA 100 mg or placebo in the first experimental session and the opposite treatment in a second session. There was a 2-week washout period in between.

Median oxytocin concentrations at baseline were 77 pg/mL in the healthy control persons and peaked after MDMA stimulation at 624 pg/mL after 180 minutes, with a maximum change in concentration of 659 pg/mL. In contrast, among the patients with AVD, baseline oxytocin levels were 60 pg/mL and peaked at just 92 pg/mL after 150 minutes, with a maximum change in concentration of 66 pg/mL.

In response to MDMA, there was an eightfold increase in plasma oxytocin area under the curve among the control persons versus no notable increase in the patients with AVD.

The net incremental oxytocin area under the curve after MDMA administration was 82% higher among control persons than patients (P < .0001).
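
For readers unfamiliar with the metric, the incremental area under the curve summarizes the entire response above baseline across the sampling period. A minimal sketch using the trapezoidal rule follows; the sampling times and values are illustrative only, not the study's schedule or data:

```python
import numpy as np

def incremental_auc(times_min, conc_pg_ml):
    """Area under the concentration-time curve above the baseline sample.

    The first sample is treated as baseline and subtracted from all
    later measurements before trapezoidal integration.
    """
    times = np.asarray(times_min, dtype=float)
    conc = np.asarray(conc_pg_ml, dtype=float)
    above = conc - conc[0]
    # Trapezoidal rule: mean of adjacent heights times interval width
    return float(np.sum((above[1:] + above[:-1]) / 2.0 * np.diff(times)))

# Illustrative values only (not study data): a control-like response
print(incremental_auc([0, 60, 120, 180, 240], [77, 350, 550, 624, 400]))
```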

The MDMA-induced increase in oxytocin was associated with reduced anxiety scores among the control persons but not the AVD patients. Similar results were seen for subjective prosocial and empathic effects.

The most frequently reported adverse effects of the MDMA provocation in both groups were fatigue, lack of appetite, and dry mouth, all of which occurred in more than half of participants.

“These findings contradict the previous theory that oxytocin stimulation has only a secondary role in the effects of MDMA. Our results, by contrast, suggest a paradigm shift and underline the importance of oxytocin as a key feature of the effects of MDMA,” Dr. Atila and colleagues concluded.

Dr. Atila, Dr. Ilie, and Dr. Raverot have disclosed no relevant financial relationships. One study coauthor owns stock in MiniMed.

A version of this article first appeared on Medscape.com.


FROM THE LANCET DIABETES & ENDOCRINOLOGY


Machine-learning model improves MI diagnosis


Use of a machine-learning model that incorporates information from a single troponin test as well as other clinical data was superior to current practice as an aid to the diagnosis of myocardial infarction in the emergency department in a new study.

“Our results suggest that by using this machine-learning model, compared to the currently recommended approach, we could double the proportion of patients who are identified correctly as having a low probability of an MI on arrival to enable immediate discharge and free up space in the emergency department,” senior author Nicholas L. Mills, MD, University of Edinburgh, Scotland, said in an interview.

Dr. Nicholas L. Mills

“And, perhaps even more importantly, use of this model could also increase the proportion of patients who are correctly identified as at a high probability of having an MI,” he added.

The study was published online in Nature Medicine.

The authors explained that at present, the likelihood of an MI diagnosis for patients presenting to the emergency department with chest pain is based on a fixed troponin threshold in serial tests at specific time points, but there are several problems with this approach.

First, a fixed troponin threshold is generally used for all patients, which does not account for age, sex, or comorbidities that are known to influence cardiac troponin concentrations. Second, the need to perform tests at specific time points for serial testing can be challenging in busy emergency departments.

And third, patients are categorized as being at low, intermediate, or high risk of MI on the basis of troponin thresholds alone, and the test does not take into account other important factors, such as the time of symptom onset or findings on the electrocardiogram.

“Our current practice of using the same threshold to rule in and rule out an MI for everyone, regardless of whether they are an 18-year-old female without a history of heart disease or an 85-year-old male with known heart failure, doesn’t perform well, and there’s a significant risk of misdiagnosis. There is also a high likelihood for inequalities in care, particularly between men and women,” Dr. Mills said.

The current study evaluated whether use of a machine learning model known as CoDE-ACS to guide decision-making could overcome some of these challenges.

The machine learning model assesses the whole spectrum of troponin levels as a continuous variable (rather than use of a single threshold) and turns this measurement into a probability that an individual patient is having an MI after accounting for other factors, including age, sex, comorbidities, and time from symptom onset.
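
The article does not describe the CoDE-ACS implementation itself, but the general idea — feeding a continuous troponin value plus clinical covariates to a probabilistic classifier and reporting the predicted probability on a 0–100 scale — can be sketched as below. The feature set, model class, and training data here are illustrative assumptions, not the published algorithm:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical training data, one row per patient:
# troponin (ng/L), age (years), sex (0 = female, 1 = male),
# hours since symptom onset, comorbidity count.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 5))
y_train = rng.integers(0, 2, size=1000)  # 1 = adjudicated MI

model = GradientBoostingClassifier().fit(X_train, y_train)

def mi_probability(troponin, age, sex, hours_since_onset, comorbidities):
    """Return a 0-100 score from the classifier's predicted MI probability."""
    x = np.array([[troponin, age, sex, hours_since_onset, comorbidities]])
    return 100 * model.predict_proba(x)[0, 1]
```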

For the current study, the CoDE-ACS model was trained in 10,000 patients with suspected acute coronary syndrome (ACS) who presented to 10 hospitals in Scotland as part of the High-STEACS trial evaluating the implementation of a high-sensitivity cardiac troponin I assay. The results were then validated in another 10,000 patients from six countries around the world.

“Using this model, the patient can have a troponin test on arrival at the emergency department. The other information on age, sex, clinical history, and time since symptom onset is keyed in, and it gives a probability on a scale of 0–100 as to whether the patient is having an MI,” Dr. Mills noted.

“It also has the capacity to incorporate more information over time. So, if there is a second troponin measurement made, then the model automatically refines the probability score,” he added.

The current study showed that use of the CoDE-ACS model identified more patients at presentation as having a low probability of having an MI than fixed cardiac troponin thresholds (61% vs. 27%) with a similar negative predictive value.

It also identified fewer patients as having a high probability of having an MI (10% vs. 16%) with a greater positive predictive value.
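
For reference, the positive and negative predictive values quoted above are simple functions of the confusion counts; a minimal sketch with hypothetical numbers:

```python
def predictive_values(tp: int, fp: int, tn: int, fn: int) -> tuple:
    """Positive and negative predictive value from confusion counts.

    PPV = TP / (TP + FP): how often a 'high probability' call is a true MI.
    NPV = TN / (TN + FN): how often a 'low probability' call is truly not MI.
    """
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical counts, for illustration only:
ppv, npv = predictive_values(tp=75, fp=25, tn=590, fn=10)
print(f"PPV {ppv:.2f}, NPV {npv:.2f}")
```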

Among patients who were identified as having a low probability of MI, the rate of cardiac death was lower than the rate among those with intermediate or high probability at 30 days (0.1% vs. 0.5% and 1.8%) and 1 year (0.3% vs. 2.8% and 4.2%).

“The results show that the machine learning model doubles the proportion of patients who can be discharged with a single test compared to the current practice of using the threshold approach. It really is a game changer in terms of its potential to improve health efficiency,” Dr. Mills said.

In terms of ruling patients in as possibly having an MI, he pointed out that troponin levels are increased in patients with a wide range of other conditions, including heart failure, kidney failure, and atrial fibrillation.

“When using the threshold approach, only one in four patients with an elevated troponin level will actually be having an MI, and that leads to confusion,” he said. “This model takes into consideration these other conditions and so it can correctly identify three out of four patients with a high probability of having an MI. We can therefore be more confident that it is appropriate to refer those patients to cardiology and save a lot of potentially unnecessary investigations and treatments in the others.”

Dr. Mills said the model also seems to work when assessing patients early on.

“Around one-third of patients present within 3 hours of symptom onset, and actually these are a high-risk group because people who have genuine cardiac pain are normally extremely uncomfortable and tend to present quickly. Current guidelines require that we do two tests in all these individuals, but this new model incorporates the time of symptom onset into its estimates of probability and therefore allows us to rule out patients even when they present very early.”

He reported that a second test is required in only one in five patients – those whose first test indicated intermediate probability.

“The second test allows us to refine the probability further, allowing us to rule another half of those patients out. We are then left with a small proportion of patients – about 1 in 10 – who remain of intermediate probability and will require additional clinical judgment.”
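
The triage flow Dr. Mills describes — discharge at low probability, refer at high probability, and repeat the test only in the intermediate group — reduces to a simple decision rule. The cutoff values below are placeholders, since the article does not report the thresholds CoDE-ACS actually uses:

```python
def triage(probability: float, low: float = 5.0, high: float = 80.0) -> str:
    """Map a 0-100 MI probability to a disposition.

    `low` and `high` are illustrative placeholders, not the
    published CoDE-ACS decision thresholds.
    """
    if probability < low:
        return "rule out: consider discharge"
    if probability >= high:
        return "rule in: refer to cardiology"
    return "intermediate: repeat troponin and rescore"
```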

Should improve inequities in MI diagnosis

Dr. Mills said the CoDE-ACS model should help correct current inequities in MI diagnosis, which leave MI underrecognized in women and younger people.

“Women have troponin concentrations that are half those of men, and although sex-specific troponin thresholds are recommended in the guidelines, they are not widely used. This automatically leads to underrecognition of heart disease in women. But this new machine learning model performs identically in men and women because it has been trained to recognize the different normal levels in men and women,” he explained.

“It will also help us not to underdiagnose MI in younger people who often have a less classical presentation of MI, and they also generally have very low concentrations of troponin, so any increase in troponin way below the current diagnostic threshold may be very relevant to their risk,” he added.

The researchers are planning a randomized trial of the new model to demonstrate the impact it could have on equality of care and overcrowding in the emergency department. In the trial, patients will be randomly assigned to undergo decision-making on the basis of troponin thresholds (current practice) or to undergo decision-making through the CoDE-ACS model.

“The hope is that this trial will show reductions in unnecessary hospital admissions and length of stay in the emergency department,” Dr. Mills said. Results are expected sometime next year.

“This algorithm is very well trained. It has learned on 20,000 patients, so it has a lot more experience than I have, and I have been a professor of cardiology for 20 years,” Dr. Mills said.

He said he believes these models will get even smarter in the future as more data are added.

“I think the future for good decision-making in emergency care will be informed by clinical decision support from well-trained machine learning algorithms and they will help us guide not just the diagnosis of MI but also heart failure and other important cardiac conditions,” he said.

‘Elegant and exciting’ data

Commenting on the study, John W. McEvoy, MB, University of Galway, Ireland, said: “These are elegant and exciting data; however, the inputs into the machine learning algorithm include all the necessary information to actually diagnose MI. So why predict MI, when a human diagnosis can just be made directly? The answer to this question may depend on whether we trust machines more than humans.”

Dr. Mills noted that clinical judgment will always be an important part of MI diagnosis.

“Currently, using the troponin threshold approach, experienced clinicians will be able to nuance the results, but invariably, there is disagreement on this, and this can be a major source of tension within clinical care. By providing more individualized information, this will help enormously in the decision-making process,” he said.

“This model is not about replacing clinical decision-making. It’s more about augmenting decision-making and giving clinicians guidance to be able to improve efficiency and reduce inequality,” he added.

The study was funded with support from the National Institute for Health Research and NHSX, the British Heart Foundation, and Wellcome Leap. Dr. Mills has received honoraria or consultancy from Abbott Diagnostics, Roche Diagnostics, Siemens Healthineers, and LumiraDx. He is employed by the University of Edinburgh, which has filed a patent on the Collaboration for the Diagnosis and Evaluation of Acute Coronary Syndrome score.

A version of this article first appeared on Medscape.com.


FROM NATURE MEDICINE


Eating disorders in children are a global public health emergency


A multicenter study published in JAMA Pediatrics indicates that an elevated proportion of children and adolescents around the world, particularly girls and those with high body mass index (BMI), experience disordered eating. The high figures are concerning from a public health perspective and highlight the need to implement strategies for preventing eating disorders.

These disorders include anorexia nervosa, bulimia nervosa, binge eating disorder, and eating disorder–not otherwise specified. The prevalence of these disorders in young people has markedly increased globally over the past 50 years. Eating disorders are among the most life-threatening mental disorders; they were responsible for 318 deaths worldwide in 2019.

Because some individuals with eating disorders conceal core symptoms and avoid or delay seeking specialist care because of feelings of embarrassment, stigma, or ambivalence toward treatment, most cases of eating disorders remain undetected and untreated.

Brazilian researchers conducted studies to assess risky behaviors and predisposing factors among young people. The researchers observed that the probability of experiencing eating disorders was higher among young people who had an intense fear of gaining weight, who experienced thin-ideal internalization, who were excessively concerned about food, who experienced compulsive eating episodes, or who used laxatives. As previously reported, most participants in these studies had never sought professional help.

A study conducted in 2020 concluded that the media greatly influences the construction of one’s body image and the creation of aesthetic standards, particularly for adolescents. Adolescents then change their eating patterns and become more vulnerable to mental disorders related to eating.

A group of researchers from several countries, including Brazilians connected to the State University of Londrina, conducted the Global Proportion of Disordered Eating in Children and Adolescents – A Systematic Review and Meta-analysis. The study was coordinated by José Francisco López-Gil, PhD, of the University of Castilla–La Mancha (Spain). The investigators determined the rate of disordered eating among children and adolescents using the SCOFF (Sick, Control, One, Fat, Food) questionnaire, which is the most widely used screening measure for eating disorders.
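
SCOFF consists of five yes/no items (paraphrased below from the published instrument), and two or more positive answers are conventionally taken as a positive screen. A minimal scoring sketch:

```python
SCOFF_ITEMS = [
    "Do you make yourself Sick because you feel uncomfortably full?",
    "Do you worry you have lost Control over how much you eat?",
    "Have you recently lost more than One stone (6.35 kg) in 3 months?",
    "Do you believe yourself to be Fat when others say you are thin?",
    "Would you say that Food dominates your life?",
]

def scoff_positive(answers, threshold=2):
    """Return True for a positive screen (>= threshold 'yes' answers)."""
    if len(answers) != len(SCOFF_ITEMS):
        raise ValueError("expected one answer per SCOFF item")
    return sum(bool(a) for a in answers) >= threshold

# Example: 'yes' to the Control and Food items -> positive screen
print(scoff_positive([False, True, False, False, True]))  # True
```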

Methods and results

Four databases were systematically searched (PubMed, Scopus, Web of Science, and the Cochrane Library); date limits were from January 1999 to November 2022. Studies were required to meet the following criteria: participants – studies of community samples of children and adolescents aged 6-18 years – and outcome – disordered eating assessed by the SCOFF questionnaire. The exclusion criteria were studies conducted with young people who had been diagnosed with physical or mental disorders; studies that were published before 1999, because the SCOFF questionnaire was designed in that year; studies in which data were collected during the COVID-19 pandemic, because of the possibility of selection bias; studies that employed data from the same surveys/studies, to avoid duplication; and systematic reviews and/or meta-analyses and qualitative and case studies.

In all, 32 studies, which involved a total of 63,181 participants from 16 countries, were included in the systematic review and meta-analysis, according to the Preferred Reporting Items for Systematic Reviews and Meta-analyses guidelines. The overall proportion of children and adolescents with disordered eating was 22.36% (95% confidence interval, 18.84%-26.09%; P < .001; n = 63,181). According to the researchers, girls were significantly more likely to report disordered eating (30.03%; 95% CI, 25.61%-34.65%; n = 27,548) than boys (16.98%; 95% CI, 13.46%-20.81%; n = 26,170; P < .001). It was also observed that disordered eating became more elevated with increasing age (B, 0.03; 95% CI, 0-0.06; P = .049) and BMI (B, 0.03; 95% CI, 0.01-0.05; P < .001).
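This summary does not state which pooling model the authors used. Purely as a hedged illustration, the sketch below assumes a standard DerSimonian-Laird random-effects meta-analysis of proportions on the logit scale; the study counts are invented for the example and are not data from the review.

    import math

    def pooled_proportion(events, totals):
        """DerSimonian-Laird random-effects pooled proportion (logit scale)."""
        ys, vs = [], []
        for e, n in zip(events, totals):
            ys.append(math.log(e / (n - e)))      # logit of each proportion
            vs.append(1.0 / e + 1.0 / (n - e))    # approximate variance
        w = [1.0 / v for v in vs]                 # fixed-effect weights
        y_fe = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
        q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, ys))
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(ys) - 1)) / c)  # between-study variance
        w_re = [1.0 / (vi + tau2) for vi in vs]   # random-effects weights
        y_re = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
        se = math.sqrt(1.0 / sum(w_re))
        expit = lambda x: 1.0 / (1.0 + math.exp(-x))
        return expit(y_re), expit(y_re - 1.96 * se), expit(y_re + 1.96 * se)

    # Invented counts (cases, sample size) for three hypothetical studies:
    print(pooled_proportion([120, 340, 75], [500, 1200, 400]))

Dedicated meta-analytic software (for example, the metafor package in R) additionally handles continuity corrections, alternative transforms such as Freeman-Tukey, and heterogeneity statistics that this toy omits.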

Interpreting the findings

According to the authors, this was the first meta-analysis to comprehensively examine the overall proportion of children and adolescents with disordered eating by gender, mean age, and BMI. They identified 14,856 (22.36%) children and adolescents with disordered eating in the population analyzed (n = 63,181). The researchers drew an important distinction: in general, disordered eating and eating disorders are not the same thing. “Not all children and adolescents who reported disordered eating behaviors (for example, selective eating) will necessarily be diagnosed with an eating disorder.” However, disordered eating in childhood or adolescence may predict outcomes associated with eating disorders in early adulthood. “For this reason, this high proportion found is worrisome and calls for urgent action to try to address this situation.”

The study also found that the proportion of children and adolescents with disordered eating was higher among girls than boys. The reasons for this difference are not well understood. Boys are presumed to underreport the problem because of the widespread perception that these disorders affect only girls and women. It has also been noted that the current diagnostic criteria for eating disorders fail to detect behaviors more commonly observed in boys than girls, such as an intense focus on gaining muscle mass and weight to improve body image satisfaction. In addition, the proportion of young people with disordered eating increased with increasing age and BMI, a finding in line with the scientific literature worldwide.

The study has certain limitations. First, only studies that assessed disordered eating with the SCOFF questionnaire were included. Second, because most of the included studies were cross-sectional, a causal relationship cannot be established. Third, because the DSM-5 added binge eating disorder and other specified eating disorders, there is not enough evidence to support the use of SCOFF in primary care and community-based settings for screening the full range of eating disorders. Fourth, the meta-analysis included studies that used self-report questionnaires to assess disordered eating, so social desirability and recall bias could have influenced the findings.

Urgent measures required

Identifying the magnitude of disordered eating and its distribution in at-risk populations is crucial for planning and executing actions aimed at preventing, detecting, and treating these conditions. Eating disorders are a global public health problem that health care professionals, families, and other community members involved in caring for children and adolescents must not ignore, the researchers said. Beyond diagnosed eating disorders, parents, guardians, and health care professionals should be alert to symptoms of disordered eating, including weight-loss dieting, binge eating, self-induced vomiting, excessive exercise, and the use of laxatives or diuretics without a medical prescription.

This article was translated from the Medscape Portuguese Edition. A version of this article appeared on Medscape.com.


FROM JAMA PEDIATRICS
