Obesity drugs overpriced, change needed to tackle issue


The lowest available national prices of drugs to treat obesity are up to 20 times higher than the estimated cost of profitable generic versions of the same agents, according to a new analysis.
 

The findings by Jacob Levi, MBBS, and colleagues were published in Obesity.

“Our study highlights the inequality in pricing that exists for effective antiobesity medications, which are largely unaffordable in most countries,” Dr. Levi, from Royal Free Hospital NHS Trust, London, said in a press release.

“We show that these drugs can actually be produced and sold profitably for low prices,” he summarized. “A public health approach that prioritizes improving access to medications should be adopted, instead of allowing companies to maximize profits,” Dr. Levi urged.

Dr. Levi and colleagues studied the oral agents orlistat, naltrexone/bupropion, topiramate/phentermine, and semaglutide, and subcutaneous liraglutide, semaglutide, and tirzepatide (all approved by the U.S. Food and Drug Administration to treat obesity, except for oral semaglutide and subcutaneous tirzepatide, which are not yet approved to treat obesity in the absence of type 2 diabetes).

“Worldwide, more people are dying from diabetes and clinical obesity than HIV, tuberculosis, and malaria combined now,” senior author Andrew Hill, MD, department of pharmacology and therapeutics, University of Liverpool, England, pointed out.
 

We need to repeat the low-cost success story with obesity drugs

“Millions of lives have been saved by treating infectious diseases at low cost in poor countries,” Dr. Hill continued. “Now we need to repeat this medical success story, with mass treatment of diabetes and clinical obesity at low prices.”

However, in an accompanying editorial, Eric A. Finkelstein, MD, and Junxing Chay, PhD, Duke-NUS Medical School, Singapore, maintain that “It would be great if everyone had affordable access to all medicines that might improve their health. Yet that is simply not possible, nor will it ever be.”

“What is truly needed is a better way to ration the health care dollars currently available in efforts to maximize population health. That is the challenge ahead not just for [antiobesity medications] but for all treatments,” they say.

“Greater use of cost-effectiveness analysis and direct negotiations, while maintaining the patent system, represents an appropriate approach for allocating scarce health care resources in the United States and beyond,” they continue.
 

Lowest current patented drug prices vs. estimated generic drug prices

New medications for obesity were highly effective in recent clinical trials, but high prices limit the ability of patients to get these medications, Dr. Levi and colleagues write.

They analyzed prices for obesity drugs in 16 low-, middle-, and high-income countries: Australia, Bangladesh, China, France, Germany, India, Kenya, Morocco, Norway, Peru, Pakistan, South Africa, Turkey, the United Kingdom, the United States, and Vietnam.

The researchers assessed the price of a 30-day supply of each of the studied branded drugs based on the lowest available price (in 2021 U.S. dollars) from multiple online national price databases.

Then they calculated the estimated minimum price of a 30-day supply of a potential generic version of these drugs, which included the cost of the active medicinal ingredients, the excipients (nonactive ingredients), the prefilled injectable device plus needles (for subcutaneous drugs), transportation, 10% profit, and 27% tax on profit.
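
To make the arithmetic concrete, the short sketch below shows how such a cost-based floor price can be assembled. The component costs used here are hypothetical placeholders, not figures from the study; only the 10% profit margin and the 27% tax on profit mirror the method described above.

```python
# Minimal sketch of the cost-based minimum-price arithmetic described above.
# All component costs below are hypothetical placeholders; only the 10% profit
# margin and the 27% tax on profit mirror the stated method.

def estimated_minimum_price(api, excipients, device, transport,
                            profit_margin=0.10, tax_on_profit=0.27):
    """Estimate a 30-day floor price: production cost plus profit plus tax on profit."""
    production_cost = api + excipients + device + transport
    profit = production_cost * profit_margin
    tax = profit * tax_on_profit
    return production_cost + profit + tax

# Hypothetical injectable product with a prefilled device and needles.
price = estimated_minimum_price(api=25.00, excipients=2.00, device=8.00, transport=1.00)
print(f"Estimated minimum 30-day price: ${price:.2f}")  # 40.57 with these placeholder inputs
```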

The national prices of the branded medications for obesity were significantly higher than the estimated minimum prices of potential generic drugs.



The highest national price for a branded oral drug for obesity vs. the estimated minimum price for a potential generic version was $100 vs. $7 for orlistat, $199 vs. $5 for phentermine/topiramate, and $326 vs. $54 for naltrexone/bupropion, for a 30-day supply.

There was an even greater difference between highest national branded drug price vs. estimated minimum generic drug price for the newer subcutaneously injectable drugs for obesity.

For example, the price of a 30-day course of subcutaneous semaglutide ranged from $95 (Turkey) to $804 (United States), while the estimated minimum price for a potential generic version was $40, roughly one-twentieth of the U.S. price.

The study was funded by grants from the Make Medicines Affordable/International Treatment Preparedness Coalition and from the National Heart, Lung, and Blood Institute of the National Institutes of Health. Coauthor Francois Venter has reported receiving support from the Bill and Melinda Gates Foundation, U.S. Agency for International Development, Unitaid, SA Medical Research Council, Foundation for Innovative New Diagnostics, the Children’s Investment Fund Foundation, Gilead, ViiV, Mylan, Merck, Adcock Ingram, Aspen, Abbott, Roche, Johnson & Johnson, Sanofi, Virology Education, SA HIV Clinicians Society, and Dira Sengwe. The other authors and Dr. Chay have reported no relevant financial relationships. Dr. Finkelstein has reported receiving support for serving on the WW scientific advisory board and an educational grant unrelated to the present work from Novo Nordisk.

A version of this article first appeared on Medscape.com.


Unawareness of memory slips could indicate risk for Alzheimer’s


Everyone’s memory fades to some extent as we age, but not everyone will develop Alzheimer’s disease. Screening for the people most likely to develop Alzheimer’s remains an ongoing challenge, as some people present unambiguous symptoms only once their disease is advanced.

A new study in JAMA Network Open suggests that one early clue lies in people’s own perception of their memory skills: those who are more aware of their declining memory capacity are less likely to develop Alzheimer’s.

“Some people are very aware of changes in their memory, but many people are unaware,” said study author Patrizia Vannini, PhD, a neurologist at Brigham and Women’s Hospital in Boston. There are gradations of unawareness of memory loss, Dr. Vannini said, from complete unawareness that anything is wrong, to a partial unawareness that memory is declining.

The study compared the records of 436 participants in the Alzheimer’s Disease Neuroimaging Initiative, a long-running Alzheimer’s research study whose data are housed at the University of Southern California. More than 90% of the participants were White, and most had a college education. Their average age was 75 years, and 53% were women.

Dr. Vannini and colleagues tracked people whose cognitive function was normal at the beginning of the study, based on the Clinical Dementia Rating. Throughout the course of the study, which included data from 2010 to 2021, 91 of the 436 participants experienced a sustained decline in their Clinical Dementia Rating scores, indicating a risk for eventual Alzheimer’s, whereas the other participants held steady.

The people who declined in cognitive function were less aware of slips in their memory, as assessed by discrepancies between people’s self-reports of their own memory skills and the perceptions of someone in their lives. For this part of the study, Dr. Vannini and colleagues used the Everyday Cognition Questionnaire, which evaluates memory tasks such as shopping without a grocery list or recalling conversations from a few days ago. Both the participant and the study partner rated their performance on such tasks compared to 10 years earlier. Those who were less aware of their memory slips were more likely to experience declines in the Clinical Dementia Rating, compared with people with a heightened concern about memory loss (as measured by being more concerned about memory decline than their study partners).
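
As a rough illustration of how an awareness measure of this kind can be operationalized, the sketch below computes a simple participant-minus-partner discrepancy on questionnaire ratings. The 1-to-4 rating scale and the scoring rule are assumptions made for illustration, not the study’s published method.

```python
# Hypothetical illustration of an awareness (discrepancy) score: the mean of a
# participant's self-ratings minus the mean of the study partner's ratings on the
# same items, where higher ratings indicate more perceived decline. The 1-4 scale
# and this scoring rule are illustrative assumptions, not the study's published method.

def awareness_discrepancy(self_ratings, partner_ratings):
    """Negative values: the partner reports more decline than the participant
    acknowledges (reduced awareness). Positive values: the participant is more
    concerned about decline than the partner (heightened concern)."""
    mean_self = sum(self_ratings) / len(self_ratings)
    mean_partner = sum(partner_ratings) / len(partner_ratings)
    return mean_self - mean_partner

# Example: the participant reports little change while the partner reports decline.
print(awareness_discrepancy([1, 2, 1, 1], [3, 3, 2, 3]))  # -1.5, suggesting reduced awareness
```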

“Partial or complete unawareness is often related to delayed diagnosis of Alzheimer’s, because the patient is unaware they are having problems,” Dr. Vannini said, adding that this is associated with a poorer prognosis as well.
 

Implications for clinicians

Soo Borson, MD, professor of clinical family medicine at the University of Southern California and coleader of a CDC-funded early dementia detection center at New York University, pointed out that sometimes people are genuinely unaware that their memory is declining, while at other times they know it all too well but say everything is fine when a doctor asks about their current memory status. That may be because people fear the label of “Alzheimer’s,” Dr. Borson suggested, or simply because they don’t want to start a protracted diagnostic pathway that could involve lots of tests and time.

Dr. Borson, who was not involved in the study, noted that the population was predominantly White and well-educated, and by definition included people who were concerned enough about potential memory loss to become part of an Alzheimer’s research network. This limits the generalizability of this study’s results to other populations, Dr. Borson said.

Despite that limitation, in Dr. Borson’s view the study points to the continued importance of clinicians (ideally a primary care doctor who knows the patient well) engaging with patients about their brain health once they reach midlife. A doctor could ask if patients have noticed a decline in their thinking or memory over the last year, for example, or a more open-ended question about any memory concerns.

Although some patients may choose to withhold concerns about their memory, Dr. Borson acknowledged, the overall thrust of these questions is to provide a safe space for patients to air their concerns if they so choose. In some cases it would be appropriate to do a simple memory test on the spot, and then proceed accordingly – either for further tests if something of concern emerges, or to reassure the patient if the test doesn’t yield anything of note. In the latter case some patients will still want further tests for additional reassurance, and Dr. Borson thinks doctors should facilitate that request even if in their own judgment nothing is wrong.

“This is not like testing for impaired kidney function by doing a serum creatinine test,” Dr. Borson said. While the orientation of the health care system is toward quick and easy answers for everything, detecting possible dementia eludes such an approach.

Dr. Vannini reports funding from the National Institutes of Health National Institute on Aging. Dr. Borson reported no disclosures.


Drive, chip, and putt your way to osteoarthritis relief


Taking a swing against arthritis

Osteoarthritis is a tough disease to manage. Exercise helps ease the stiffness and pain of the joints, but at the same time, the disease makes it difficult to do that beneficial exercise. Even a relatively simple activity like jogging can hurt more than it helps. If only there were a low-impact exercise that was incredibly popular among the generally older population who are likely to have arthritis.

We love a good golf study here at LOTME, and a group of Australian and U.K. researchers has provided one. Osteoarthritis affects 2 million people in the land down under, making it the most common source of disability there. In that population, only 64% reported their physical health to be good, very good, or excellent. Among the 459 golfers with OA that the study authors surveyed, however, the percentage reporting good health rose to more than 90%.


A similar story emerged when they looked at mental health. Nearly a quarter of nongolfers with OA reported high or very high levels of psychological distress, compared with just 8% of golfers. This pattern of improved physical and mental health remained when the researchers looked at the general, non-OA population.

This isn’t the first time golf’s been connected with improved health, and previous studies have shown golf to reduce the risks of cardiovascular disease, diabetes, and obesity, among other things. Just walking one 18-hole round significantly exceeds the CDC’s recommended 150 minutes of physical activity per week. Go out multiple times a week – leaving the cart and beer at home, American golfers – and you’ll be fit for a lifetime.

The golfers on our staff, however, are still waiting for those mental health benefits to kick in. Because when we’re adding up our scorecard after that string of four double bogeys to end the round, we’re most definitely thinking: “Yes, this sport is reducing my psychological distress. I am having fun right now.”
 

Battle of the sexes’ intestines

There are, we’re sure you’ve noticed, some differences between males and females. Females, for one thing, have longer small intestines than males. Everybody knows that, right? You didn’t know? Really? … Really?


Well, then, we’re guessing you haven’t read “Hidden diversity: Comparative functional morphology of humans and other species” by Erin A. McKenney, PhD, of North Carolina State University, Raleigh, and associates, which just appeared in PeerJ. We couldn’t put it down, even in the shower – a real page-turner/scroller. (It’s a great way to clean a phone, for those who also like to scroll, text, or talk on the toilet.)

The researchers got out their rulers, calipers, and string and took many measurements of the digestive systems of 45 human cadavers (21 female and 24 male), which were compared with data from 10 rats, 10 pigs, and 10 bullfrogs, which had been collected (the measurements, not the animals) by undergraduate students enrolled in a comparative anatomy laboratory course at the university.

There was little intestinal-length variation among the four-legged subjects, but when it comes to humans, females have “consistently and significantly longer small intestines than males,” the investigators noted.

The women’s small intestines, almost 14 feet long on average, were about a foot longer than the men’s, which suggests that women are better able to extract nutrients from food and “supports the canalization hypothesis, which posits that women are better able to survive during periods of stress,” coauthor Amanda Hale said in a written statement from the school. The way to a man’s heart may be through his stomach, but the way to a woman’s heart is through her duodenum, it seems.

Fascinating stuff, to be sure, but the thing that really caught our eye in the PeerJ article was the authors’ suggestion “that organs behave independently of one another, both within and across species.” Organs behaving independently? A somewhat ominous concept, no doubt, but it does explain a lot of the sounds we hear coming from our guts, which can get pretty frightening, especially on chili night.
 

 

 

Dog walking is dangerous business

Yes, you did read that right. A lot of strange things can send you to the emergency department. Go ahead and add dog walking onto that list.

Investigators from Johns Hopkins University estimate that more than 422,000 adults presented to U.S. emergency departments with injuries related to leash-dependent dog walking between 2001 and 2020.


With almost 53% of U.S. households owning at least one dog in 2021-2022 in the wake of the COVID pet boom, this kind of occurrence is becoming more common than you think. The annual number of dog-walking injuries more than quadrupled from 7,300 to 32,000 over the course of the study, and the researchers link that spike to the promotion of dog walking for fitness, along with the boost of ownership itself.

The most common injuries listed in the National Electronic Injury Surveillance System database were finger fracture, traumatic brain injury, and shoulder sprain or strain. These mostly involved falls from being pulled, tripped, or tangled up in the leash while walking. For those aged 65 years and older, traumatic brain injury and hip fracture were the most common.

Women were 50% more likely to sustain a fracture than were men, and dog owners aged 65 and older were three times as likely to fall, twice as likely to get a fracture, and 60% more likely to have brain injury than were younger people. Now, that’s not to say younger people don’t also get hurt. After all, dogs aren’t ageists. The researchers have that data but it’s coming out later.

Meanwhile, the pitfalls involved with just trying to get our daily steps in while letting Muffin do her business have us on the lookout for random squirrels.


New ABIM fees to stay listed as ‘board certified’ irk physicians


Abdul Moiz Hafiz, MD, was flabbergasted when he received a phone call from his institution’s credentialing office telling him that he was not certified for interventional cardiology – even though he had passed that exam in 2016.

Dr. Hafiz, who directs the Advanced Structural Heart Disease Program at Southern Illinois University, phoned the American Board of Internal Medicine (ABIM), where he learned that to restore his credentials, he would need to pay $1,225 in maintenance of certification (MOC) fees.

Like Dr. Hafiz, many physicians have been dismayed to learn that the ABIM is now listing as “not certified” physicians who have passed board exams but have not paid annual MOC fees of $220 per year for the first certificate and $120 for each additional certificate.

Even doctors who are participating in mandatory continuing education outside the ABIM’s auspices are finding themselves listed as “not certified.” Some physicians learned of the policy change only after applying for hospital privileges or for jobs that require ABIM certification.

Now that increasing numbers of physicians are employed by hospitals and health care organizations that require ABIM certification, many doctors have no option but to pony up the fees if they want to continue to practice medicine.

“We have no say in the matter,” said Dr. Hafiz, “and there’s no appeal process.”

The change affects nearly 330,000 physicians. Responses to the policy on Twitter included accusations of extortion and denunciations of the ABIM’s “money grab policies.”

Sunil Rao, MD, director of interventional cardiology at NYU Langone Health and president of the Society for Cardiovascular Angiography and Interventions (SCAI), has heard from many SCAI members who had experiences similar to Dr. Hafiz’s. While Dr. Rao describes some of the Twitter outrage as “emotional,” he does acknowledge that the ABIM’s moves appear to be financially motivated.

“The issue here was that as soon as they paid the fee, all of a sudden, ABIM flipped the switch and said they were certified,” he said. “It certainly sounds like a purely financial kind of structure.”

Richard Baron, MD, president and CEO of the ABIM, said doctors are misunderstanding the policy change.

“No doctor loses certification solely for failure to pay fees,” Dr. Baron told this news organization. “What caused them to be reported as not certified was that we didn’t have evidence that they had met program requirements. They could say, ‘But I did meet program requirements, you just didn’t know it.’ To which our answer would be, for us to know it, we have to process them. And our policy is that we don’t process them unless you are current on your fees.”

This is not the first time ABIM policies have alienated physicians.

Last year, the ABIM raised its MOC fees from $165 to $220. That also prompted a wave of outrage. Other grievances go further back. At one time, being board certified was a lifetime credential. However, in 1990 the ABIM made periodic recertification mandatory.

The process, which came to be known as “maintenance of certification,” had to be completed every 10 years, and fees were charged for each certification. At that point, said Dr. Baron, the relationship between the ABIM and physicians changed from a one-time interaction to a career-long relationship. He advises doctors to check in periodically on their portal page at the ABIM or download the app so they will always know their status.

Many physicians would prefer not to be bound to a lifetime relationship with the ABIM. There is an alternative certifying board, the National Board of Physicians and Surgeons (NBPAS), but its certification is accepted by only a limited number of hospitals.

“Until the NBPAS gains wide recognition,” said Dr. Hafiz, “the ABIM is going to continue to have basically a monopoly over the market.”

The value of MOC itself has been called into question. “There are no direct data supporting the value of the MOC process in either improving care, making patient care safer, or making patient care higher quality,” said Dr. Rao. This feeds frustration in a clinical community already dealing with onerous training requirements and expensive board certification exams and adds to the perception that it is a purely financial transaction, he said. (Studies examining whether the MOC system improves patient care have shown mixed results.)

The true value of the ABIM to physicians, Dr. Baron contends, is that the organization is an independent third party that differentiates those doctors from people who don’t have their skills, training, and expertise. “In these days, where anyone can be an ‘expert’ on the Internet, that’s more valuable than ever before,” he said.
 

A version of this article first appeared on Medscape.com.


BMI is a flawed measure of obesity. What are alternatives?

Article Type
Changed
Mon, 05/01/2023 - 13:53

“BMI is trash. Full stop.” This controversial tweet, which received thousands of likes and retweets, was cited in a recent article by one doctor on when physicians might stop using body mass index (BMI) to diagnose obesity.

BMI has for years been the consensus default method for assessing whether a person is overweight or has obesity, and is still widely used as the gatekeeper metric for treatment eligibility for certain weight-loss agents and bariatric surgery.

But growing appreciation of the limitations of BMI is causing many clinicians to consider alternative measures of obesity that can better assess both the amount of adiposity as well as its body location, an important determinant of the cardiometabolic consequences of fat.

Alternative metrics include waist circumference and/or waist-to-height ratio (WHtR); imaging methods such as CT, MRI, and dual-energy x-ray absorptiometry (DXA); and bioelectrical impedance to assess fat volume and location. All have made some inroads on the tight grip BMI has had on obesity assessment.

Chances are, however, that BMI will not fade away anytime soon given how entrenched it has become in clinical practice and for insurance coverage, as well as its relative simplicity and precision.

“BMI is embedded in a wide range of guidelines on the use of medications and surgery. It’s embedded in Food and Drug Administration regulations and for billing and insurance coverage. It would take extremely strong data and years of work to undo the infrastructure built around BMI and replace it with something else. I don’t see that happening [anytime soon],” commented Daniel H. Bessesen, MD, a professor at the University of Colorado at Denver, Aurora, and chief of endocrinology for Denver Health.

“It would be almost impossible to replace all the studies that have used BMI with investigations using some other measure,” he said.
 

BMI is ‘imperfect’

The entrenched position of BMI as the go-to metric doesn’t keep detractors from weighing in. As noted in a commentary on current clinical challenges surrounding obesity recently published in Annals of Internal Medicine, the journal’s editor-in-chief, Christine Laine, MD, and senior deputy editor Christina C. Wee, MD, listed six top issues clinicians must deal with, one of which, they say, is the need for a better measure of obesity than BMI.

“Unfortunately, BMI is an imperfect measure of body composition that differs with ethnicity, sex, body frame, and muscle mass,” noted Dr. Laine and Dr. Wee.

BMI is based on a person’s weight in kilograms divided by the square of their height in meters. A “healthy” BMI is between 18.5 and 24.9 kg/m2, overweight is 25-29.9, and 30 or greater is considered to represent obesity. However, certain ethnic groups have lower cutoffs for overweight or obesity because of evidence that such individuals can be at higher risk of obesity-related comorbidities at lower BMIs.
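
To make that arithmetic concrete, here is a minimal Python sketch of the calculation and the category cutoffs cited above; the function names are illustrative, and the lower ethnicity-specific cutoffs mentioned in the text are not modeled.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by the square of height in meters."""
    return weight_kg / height_m ** 2


def bmi_category(value: float) -> str:
    """Categories per the standard cutoffs cited above; ethnicity-specific
    lower cutoffs mentioned in the text are not modeled here."""
    if value < 18.5:
        return "below healthy range"
    if value < 25:
        return "healthy"
    if value < 30:
        return "overweight"
    return "obesity"


# Example: 95 kg at 1.75 m gives a BMI of about 31, which falls in the obesity range.
example = bmi(95, 1.75)
print(f"BMI = {example:.1f} ({bmi_category(example)})")
```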

“BMI was chosen as the initial screening tool [for obesity] not because anyone thought it was perfect or the best measure but because of its simplicity. All you need is height, weight, and a calculator,” Dr. Wee said in an interview.

Numerous online calculators are available, including one from the Centers for Disease Control and Prevention where height in feet and inches and weight in pounds can be entered to generate the BMI.

BMI is also inherently limited by being “a proxy for adiposity” and not a direct measure, added Dr. Wee, who is also director of the Obesity Research Program of Beth Israel Deaconess Medical Center, Boston.

As such, BMI can’t distinguish between fat and muscle because it relies on weight only to gauge adiposity, noted Tiffany Powell-Wiley, MD, an obesity researcher at the National Heart, Lung, and Blood Institute in Bethesda, Md. Another shortcoming of BMI is that it “is good for distinguishing population-level risk for cardiovascular disease and other chronic diseases, but it does not help as much for distinguishing risk at an individual level,” she said in an interview.

These and other drawbacks have prompted researchers to look for other useful metrics. WHtR, for example, has recently made headway as a potential BMI alternative or complement.
 

 

 

The case for WHtR

Concern about overreliance on BMI despite its limitations is not new. In 2015, an American Heart Association scientific statement from the group’s Obesity Committee concluded that “BMI alone, even with lower thresholds, is a useful but not an ideal tool for identification of obesity or assessment of cardiovascular risk,” especially for people from Asian, Black, Hispanic, and Pacific Islander populations.

The writing panel also recommended that clinicians measure waist circumference annually and use that information along with BMI “to better gauge cardiovascular risk in diverse populations.”

Momentum for moving beyond BMI alone has continued to build following the AHA statement.

In September 2022, the National Institute for Health and Care Excellence, which sets policies for the United Kingdom’s National Health Service, revised its guidance for assessment and management of people with obesity. The updated guidance recommends that when clinicians assess “adults with BMI below 35 kg/m2, measure and use their WHtR, as well as their BMI, as a practical estimate of central adiposity and use these measurements to help to assess and predict health risks.”

NICE released an extensive literature review with the revision, and based on the evidence, said that “using waist-to-height ratio as well as BMI would help give a practical estimate of central adiposity in adults with BMI under 35 kg/m2. This would in turn help professionals assess and predict health risks.”

However, the review added that, “because people with a BMI over 35 kg/m2 are always likely to have a high WHtR, the committee recognized that it may not be a useful addition for predicting health risks in this group.” The 2022 NICE review also said that it is “important to estimate central adiposity when assessing future health risks, including for people whose BMI is in the healthy-weight category.”
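
As a rough illustration of how WHtR could sit alongside BMI in the approach NICE describes, the sketch below computes both measures and applies a simple flag; the 0.5 WHtR cutoff is a widely cited rule of thumb rather than a figure drawn from this article, and the function is purely illustrative.

```python
def waist_to_height_ratio(waist_cm: float, height_cm: float) -> float:
    """Waist circumference divided by height, both in the same units."""
    return waist_cm / height_cm


def flag_central_adiposity(bmi_value: float, waist_cm: float, height_cm: float,
                           whtr_cutoff: float = 0.5) -> bool:
    """Illustrative screen only. Per the NICE guidance cited above, WHtR is used
    alongside BMI when BMI is below 35 kg/m2; above that, a high WHtR is assumed
    (the review notes such people "are always likely to have a high WHtR").
    The 0.5 cutoff is a widely cited rule of thumb, not a figure from this article."""
    if bmi_value >= 35:
        return True
    return waist_to_height_ratio(waist_cm, height_cm) >= whtr_cutoff


# Example: BMI 27 with a 94 cm waist at 170 cm height -> WHtR of about 0.55, flagged.
print(flag_central_adiposity(27, 94, 170))
```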

This new emphasis by NICE on measuring and using WHtR as part of obesity assessment “represents an important change in population health policy,” commented Dr. Powell-Wiley. “I expect more professional organizations will endorse use of waist circumference or waist-to-height ratio now that NICE has taken this step,” she predicted.

Waist circumference and WHtR may become standard measures of adiposity in clinical practice over the next 5-10 years.

The recent move by NICE to highlight a complementary role for WHtR “is another acknowledgment that BMI is an imperfect tool for stratifying cardiometabolic risk in a diverse population, especially in people with lower BMIs” because of its variability, commented Jamie Almandoz, MD, medical director of the weight wellness program at UT Southwestern Medical Center, Dallas.
 

WHtR vs. BMI

Another recent step forward for WHtR came with the publication of a post hoc analysis of data collected in the PARADIGM-HF trial, a study that had the primary purpose of comparing two medications for improving outcomes in more than 8,000 patients with heart failure with reduced ejection fraction.

The new analysis showed that “two indices that incorporate waist circumference and height, but not weight, showed a clearer association between greater adiposity and a higher risk of heart failure hospitalization,” compared with BMI.

WHtR was one of the two indices identified as being a better correlate for the adverse effect of excess adiposity compared with BMI.

The authors of the post hoc analysis did not design their analysis to compare WHtR with BMI. Instead, their goal was to better understand what’s known as the “obesity paradox” in people with heart failure with reduced ejection fraction: the recurring observation that patients with heart failure who have lower BMIs fare worse, with higher rates of mortality and adverse cardiovascular outcomes, than patients with higher BMIs.

The new analysis showed that this paradox disappeared when WHtR was substituted for BMI as the obesity metric.

This “provides meaningful data about the superiority of WHtR, compared with BMI, for predicting heart failure outcomes,” said Dr. Powell-Wiley, although she cautioned that the analysis was limited by scant data in diverse populations and did not look at other important cardiovascular disease outcomes. While Dr. Powell-Wiley does not think that WHtR needs assessment in a prospective, controlled trial, she called for analysis of pooled prospective studies with more diverse populations to better document the advantages of WHtR over BMI.

The PARADIGM-HF post hoc analysis shows again how flawed BMI is for health assessment and the relative importance of an individualized understanding of a person’s body composition, Dr. Almandoz said in an interview. “As we collect more data, there is increasing awareness of how imperfect BMI is.”
 

 

 

Measuring waist circumference is tricky

Although WHtR looks promising as a substitute for or add-on to BMI, it has its own limitations, particularly the challenge of accurately measuring waist circumference.

Measuring waist circumference “not only takes more time but requires the assessor to be well trained about where to put the tape measure and making sure it’s measured at the same place each time,” even when different people take serial measurements from individual patients, noted Dr. Wee. Determining waist circumference can also be technically difficult when done on larger people, she added, and collectively these challenges make waist circumference “less reproducible from measurement to measurement.”

“It’s relatively clear how to standardize measurement of weight and height, but there is a huge amount of variability when the waist is measured,” agreed Dr. Almandoz. “And waist circumference also differs by ethnicity, race, sex, and body frame. There are significant differences in waist circumference levels that associate with increased health risks” between, for example, White and South Asian people.

Another limitation of waist circumference and WHtR is that they “cannot differentiate between visceral and abdominal subcutaneous adipose tissue, which are vastly different regarding cardiometabolic risk,” commented Ian Neeland, MD, director of cardiovascular prevention at the University Hospitals Harrington Heart & Vascular Institute, Cleveland.
 

The imaging option

“Waist-to-height ratio is not the ultimate answer,” Dr. Neeland said in an interview. He instead endorsed “advanced imaging for body fat distribution,” such as CT or MRI scans, as his pick for what should be the standard obesity metric, “given that it is much more specific and actionable for both risk assessment and response to therapy. I expect slow but steady advancements that move away from BMI cutoffs, for example for bariatric surgery, given that BMI is an imprecise and crude tool.”

But although imaging with methods like CT and MRI may provide the best accuracy and precision for tracking the volume of a person’s cardiometabolically dangerous fat, they are also hampered by relatively high cost and, for CT and DXA, the issue of radiation exposure.

“CT, MRI, and DXA scans give more in-depth assessment of body composition, but should we expose people to the radiation and the cost?” Dr. Almandoz wondered.

“Height, weight, and waist circumference cost nothing to obtain,” creating a big relative disadvantage for imaging, said Naveed Sattar, MD, professor of metabolic medicine at the University of Glasgow.

“Data would need to show that imaging gives clinicians substantially more information about future risk” to justify its price, Dr. Sattar emphasized.
 

BMI’s limits mean adding on

Regardless of whichever alternatives to BMI end up getting used most, experts generally agree that BMI alone is looking increasingly inadequate.

“Over the next 5 years, BMI will come to be seen as a screening tool that categorizes people into general risk groups” that also needs “other metrics and variables, such as age, race, ethnicity, family history, blood glucose, and blood pressure to better describe health risk in an individual,” predicted Dr. Bessesen.

The endorsement of WHtR by NICE “will lead to more research into how to incorporate WHtR into routine practice. We need more evidence to translate what NICE said into practice,” said Dr. Sattar. “I don’t think we’ll see a shift away from BMI, but we’ll add alternative measures that are particularly useful in certain patients.”

“Because we live in diverse societies, we need to individualize risk assessment and couple that with technology that makes analysis of body composition more accessible,” agreed Dr. Almandoz. He noted that the UT Southwestern weight wellness program where he practices has, for about the past decade, routinely collected waist circumference and bioelectrical impedance data as well as BMI on all people seen in the practice for obesity concerns. Making these additional measurements on a routine basis also helps strengthen patient engagement.

“We get into trouble when we make rigid health policy and clinical decisions based on BMI alone without looking at the patient holistically,” said Dr. Wee. “Patients are more than arbitrary numbers, and clinicians should make clinical decisions based on the totality of evidence for each individual patient.”

Dr. Bessesen, Dr. Wee, Dr. Powell-Wiley, and Dr. Almandoz reported no relevant financial relationships. Dr. Neeland has reported being a consultant for Merck. Dr. Sattar has reported being a consultant or speaker for Abbott Laboratories, Afimmune, Amgen, AstraZeneca, Boehringer Ingelheim, Eli Lilly, Hanmi Pharmaceuticals, Janssen, MSD, Novartis, Novo Nordisk, Pfizer, Roche Diagnostics, and Sanofi.

A version of this article originally appeared on Medscape.com.


Diagnosis by dog: Canines detect COVID in schoolchildren with no symptoms

Article Type
Changed
Fri, 04/28/2023 - 00:44

Scent-detecting dogs have long been used to sniff out medical conditions ranging from low blood sugar and cancer to malaria, impending seizures, and migraines – not to mention explosives and narcotics.

Recently, the sensitivity of the canine nose has been tested as a strategy for screening for SARS-CoV-2 infection in schoolchildren showing no outward symptoms of the virus. A pilot study led by Carol A. Glaser, DVM, MD, of the California Department of Public Health in Richmond, found that trained dogs had an accuracy of more than 95% for detecting the odor of volatile organic compounds, or VOCs, produced by COVID-infected individuals.

The authors believe that odor-based diagnosis with dogs could eventually provide a rapid, inexpensive, and noninvasive way to screen large groups for COVID-19 without the need for antigen testing.

“This is a new program with research ongoing, so it would be premature to consider it from a consumer’s perspective,” Dr. Glaser said in an interview. “However, the data look promising and we are hopeful we can continue to pilot various programs in various settings to see where, and if, dogs can be used for biomedical detection.”
 

In the lab and in the field

In a study published online in JAMA Pediatrics, Dr. Glaser’s group found that after 2 months’ training on COVID-19 scent samples in the laboratory, the dogs detected the presence of the virus more than 95% of the time. Antigen tests were used as a comparative reference.

In medical terms, the dogs achieved a greater than 95% accuracy on two important measures of effectiveness: sensitivity – a test’s ability to correctly detect the positive presence of disease – and specificity – the ability of a test to accurately rule out the presence of disease and identify as negative an uninfected person.

Next, the researchers piloted field tests in 50 visits at 27 schools from April 1 to May 25, 2022, to compare dogs’ detection ability with that of standard laboratory antigen testing. Participants in the completely voluntary screening numbered 1,558 and ranged in age from 9 to 17 years. Of these, 56% were girls and 89% were students. Almost 70% were screened at least twice.

Overall, the field test compared 3,897 paired antigen-vs.-dog screenings. The dogs accurately signaled the presence of 85 infections and ruled out 3,411 infections, for an overall accuracy of 90%. In 383 cases, however, they inaccurately signaled the presence of infection (false positives) and missed 18 actual infections (false negatives). That translated to a sensitivity in the field of 83%, considerably lower than that of their lab performance.
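
As a check on that arithmetic, the short sketch below recomputes the field-test metrics from the counts reported above; the variable names are ours, and the rounding matches the figures in the text.

```python
# Counts from the field comparison of 3,897 paired antigen-vs.-dog screenings.
true_pos, true_neg = 85, 3411    # infections correctly signaled / correctly ruled out
false_pos, false_neg = 383, 18   # false alarms / missed infections

total = true_pos + true_neg + false_pos + false_neg      # 3,897 paired screenings
sensitivity = true_pos / (true_pos + false_neg)          # 85 / 103, about 0.83
specificity = true_neg / (true_neg + false_pos)          # 3,411 / 3,794, about 0.90
accuracy = (true_pos + true_neg) / total                 # 3,496 / 3,897, about 0.90

print(f"n = {total}: sensitivity {sensitivity:.0%}, "
      f"specificity {specificity:.0%}, accuracy {accuracy:.0%}")
# -> n = 3897: sensitivity 83%, specificity 90%, accuracy 90%
```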

Direct screening of individuals with dogs outside of the lab involved circumstantial factors that likely contributed to decreased sensitivity and specificity, the authors acknowledged. These included such distractions as noise and the presence of excitable young children, as well as environmental conditions such as wind and other odors. What about dog phobia and dog hair allergy? “Dog screening takes only a few seconds per student and the dogs do not generally touch the participant as they run a line and sniff at ankles,” Dr. Glaser explained.

As for allergies, the rapid, ankle-level screening occurred in outdoor settings. “The chance of allergies is very low. This would be similar to someone who is out walking on the sidewalk and walks by a dog,” Dr. Glaser said.

Last year, a British trial of almost 4,000 adults tested six dogs trained to detect differences in VOCs between COVID-infected and uninfected individuals. Given samples from both groups, the dogs were able to distinguish between infected and uninfected samples with a sensitivity for detecting the virus ranging from 82% to 94% and a specificity for ruling it out of 76% to 92%. And they were able to smell the VOCs even when the viral load was low. The study also tested organic sensors, which proved even more accurate than the canines.

According to lead author James G. Logan, PhD, a disease control expert at the London School of Hygiene & Tropical Medicine in London, “Odour-based diagnostics using dogs and/or sensors may prove a rapid and effective tool for screening large numbers of people. Mathematical modelling suggests that dog screening plus a confirmatory PCR test could detect up to 89% of SARS-CoV-2 infections, averting up to 2.2 times as much transmission compared to isolation of symptomatic individuals only.”

Funding was provided by the Centers for Disease Control and Prevention Foundation (CDCF) to Early Alert Canines for the purchase and care of the dogs and the support of the handlers and trainers. The CDCF had no other role in the study. Coauthor Carol A. Edwards of Early Alert Canines reported receiving grants from the CDCF.


Ablation for atrial fibrillation may protect the aging brain


Treating atrial fibrillation with catheter ablation in addition to medical management may offer greater protection against cognitive impairment than medical management alone, new research suggests.

Investigators found that adults who had previously undergone catheter ablation were significantly less likely to be cognitively impaired during the 2-year study period, compared with those who received medical management alone.

“Catheter ablation is intended to stop atrial fibrillation and restore the normal rhythm of the heart. By doing so, there is an improved cerebral hemodynamic profile,” said Bahadar S. Srichawla, DO, department of neurology, University of Massachusetts, Worcester.

“Thus, long-term cognitive outcomes may be improved due to improved blood flow to the brain by restoring the normal rhythm of the heart,” he added.

This research was presented at the 2023 annual meeting of the American Academy of Neurology.
 

Heart-brain connection

The study involved 887 older adults (mean age 75; 49% women) with atrial fibrillation participating in the SAGE-AF (Systematic Assessment of Geriatric Elements) study. A total of 193 (22%) participants underwent catheter ablation prior to enrollment. These individuals more frequently had an implantable cardiac device (46% vs. 28%, P < .001) and persistent atrial fibrillation (31% vs. 23%, P < .05).

Cognitive function was assessed using the Montreal Cognitive Assessment (MoCA) tool at baseline and 1 and 2 years, with cognitive impairment defined as a MoCA score of 23 or below. Individuals who had catheter ablation had an average MoCA score of 25, compared with an average score of 23 in those who didn’t have catheter ablation.

After adjusting for potential confounding factors such as heart disease, renal disease, sleep apnea, and atrial fibrillation risk score, those who underwent catheter ablation were 36% less likely to develop cognitive impairment over 2 years than those who were treated only with medication (adjusted odds ratio, 0.64; 95% CI, 0.46-0.88).
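
To make the arithmetic behind that statement explicit, the short sketch below (illustrative only, using the figures reported above) converts the adjusted odds ratio into the quoted percent reduction in odds and checks that the 95% confidence interval excludes 1.0.

```python
# Interpret the adjusted odds ratio reported for catheter ablation.
adjusted_or = 0.64
ci_lower, ci_upper = 0.46, 0.88

# An odds ratio below 1.0 means lower odds of cognitive impairment in the
# ablation group; the percent reduction in odds is (1 - OR) * 100.
percent_reduction = (1 - adjusted_or) * 100
print(f"reduction in odds: {percent_reduction:.0f}%")   # 36%

# A 95% CI lying entirely below 1.0 indicates statistical significance
# at the conventional 0.05 level.
print("CI excludes 1.0:", ci_lower < ci_upper < 1.0)    # True
```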

During his presentation, Dr. Srichawla noted there is a hypothesis that individuals who are anticoagulated with warfarin may be prone to cerebral microbleeds and may be more cognitively impaired over time.

However, in a subgroup analysis, “cognitive function was similar at 2-year follow-up in those anticoagulated with warfarin, compared with all other anticoagulants. However, it should be noted that in this study, a direct head-to-head comparison was not done,” Dr. Srichawla told attendees.

“In patients with atrial fibrillation, catheter ablation should be discussed as a potential treatment strategy, particularly in patients who have or are at risk for cognitive decline and dementia,” Dr. Srichawla said.
 

Intriguing findings

Commenting on the research, Percy Griffin, PhD, Alzheimer’s Association director of scientific engagement, said the study is “intriguing and adds to what we know from previous research connecting cardiovascular and cognitive health.”

“However, there are limitations to this study,” Dr. Griffin said, “including its predominantly White cohort and the use of only neuropsychiatric testing to diagnose dementia. More research is needed to fully understand the impact of atrial fibrillation on cognitive outcomes in all people.”

“It’s well known that the heart and the brain are intimately connected. Individuals experiencing any cardiovascular issues should speak to their doctor,” Dr. Griffin added.

Shaheen Lakhan, MD, PhD, a neurologist and researcher in Boston, agreed. “If you ever get up too quickly and feel woozy, that is your brain not getting enough blood flow and you are getting all the warning signs to correct that – or else! Similarly, with atrial fibrillation, the heart is contracting, but not effectively pumping blood to the brain,” he said.

“This line of research shows that correcting the abnormal heart rhythm by zapping the faulty circuit with a catheter is actually better for your brain health than just taking medications alone,” added Dr. Lakhan, who was not involved with the study.

The study had no commercial funding. Dr. Srichawla, Dr. Griffin, and Dr. Lakhan report no relevant financial relationships.

A version of this article first appeared on Medscape.com.


What new cardiovascular disease risk factors have emerged?


Cardiovascular disease (CVD) is the main cause of premature death and disability in the general population, and according to the World Health Organization, the incidence of CVD is increasing throughout the world. Conventional risk factors that contribute to the occurrence and worsening of CVD have been identified and widely studied. They include high cholesterol levels, high blood pressure, diabetes, obesity, smoking, and lack of physical activity. Despite the introduction of measures to prevent and treat these risk factors with lipid-lowering drugs, antihypertensives, antiplatelet drugs, and anticoagulants, the mortality rate related to CVD remains high.
 

Despite the effectiveness of many currently available treatment options, there are still significant gaps in risk assessment and treatment of CVD.

In the past few years, new coronary risk factors have emerged. They are detailed in an editorial published in The American Journal of Medicine that describes their role and their impact on our cardiovascular health.
 

Systemic inflammation

The new coronary risk factors include the following diseases characterized by systemic inflammation:

  • Gout – Among patients who have experienced a recent flare of gout, the probability of experiencing an acute cardiovascular event such as a myocardial infarction or stroke is increased.
  • Rheumatoid arthritis and systemic lupus erythematosus – Patients with one or both of these conditions have higher odds of concomitant premature and extremely premature coronary artery disease.
  • Inflammatory bowel disease (Crohn’s disease or ulcerative colitis) – Patients with this disease have increased odds of developing coronary artery disease.
  • Psoriasis – Patients with psoriasis are up to 50% more likely to develop CVD.

Maternal and childhood factors

The following maternal and childhood factors are associated with an increased risk of developing coronary artery disease: gestational diabetes; preeclampsia; delivering a child of low birth weight; preterm delivery; and premature or surgical menopause. The mechanism linking each of these conditions to coronary artery disease is not known but may involve increased cytokine levels and oxidative stress.

An unusual and yet unexplained association has been observed between migraine headaches with aura in women and incident CVD.

Also of interest is the association between early life trauma and the risk of adverse cardiovascular outcomes in young and middle-aged individuals with a history of myocardial infarction.

Transgender patients who present for gender-affirming care are also at increased cardiovascular risk. Among these patients, the increase in coronary artery disease risk may be related to high rates of anxiety and depression.
 

Environmental factors

Low socioeconomic status has emerged as a risk factor. Increased psychosocial stressors, limited educational and economic opportunities, and lack of peer influence favoring healthier lifestyle choices may be causative elements leading to higher rates of coronary artery disease among individuals living in low socioeconomic conditions.

Air pollution was estimated to have caused 9 million deaths worldwide in 2019, with 62% due to CVD and 31.7% to coronary artery disease. Severely polluted environmental aerosols contain several toxic metals, such as lead, mercury, arsenic, and cadmium. Transient exposure to various air pollutants may trigger the onset of an acute coronary syndrome.
 

Lifestyle factors

Among patients who have experienced a first myocardial infarction, long working hours increase the risk of a recurrent event, possibly because of prolonged exposure to work stressors.

Skipping breakfast has been linked to increased cardiovascular and all-cause mortality.

Long-term consumption of drinks containing sugar and artificial sweeteners has also been associated with increased cardiovascular mortality.

Recognizing one or more of these emerging risk factors could prompt clinicians and patients to redouble efforts to reduce conventional cardiovascular risk factors to a minimum.

This article was translated from Univadis Italy, which is part of the Medscape Professional Network.

A version of this article first appeared on Medscape.com.


Seasonal variation in thyroid hormone TSH may lead to overprescribing


Seasonal variation in one of the hormones used to monitor thyroid function could in turn lead to false diagnoses of subclinical hypothyroidism and unnecessary prescriptions of levothyroxine, according to Yale clinical chemist Joe M. El-Khoury, PhD.

A Japanese study of more than 7,000 healthy individuals showed that thyroid-stimulating hormone (TSH, also known as thyrotropin) varies widely across the seasons, he said, peaking in the northern hemisphere’s winter months (January to February) and reaching its low in the summer months (June to August). That paper was published last year in the Journal of the Endocrine Society.


But free thyroxine (FT4) levels in the Japanese population remained relatively stable, he wrote in a letter recently published in Clinical Chemistry.

“If you end up with a mildly elevated TSH result and a normal FT4, try getting retested 2-3 months later to make sure this is not a seasonal artifact or transient increase before prescribing/taking levothyroxine unnecessarily,” advised Dr. El-Khoury, director of Yale University’s Clinical Chemistry Laboratory, New Haven, Conn.

“Because the [population-based, laboratory] reference ranges don’t account for seasonal variation, we’re flagging a significant number of people as high TSH when they’re normal, and physicians are prescribing levothyroxine inappropriately to healthy people who don’t need it,” he told this news organization, adding that overtreatment can be harmful, particularly for elderly people.

This seasonal variation in TSH could account for between a third and a half of the roughly 90% of levothyroxine prescriptions that a 2021 U.S. study found to be unnecessary, Dr. El-Khoury added.

In a comment, Trisha Cubb, MD, said that Dr. El-Khoury’s letter “raises a good point, that we really need to look at our reference ranges, especially when more and more studies are showing that so many thyroid hormone prescriptions may not be necessary.”

Dr. Cubb, thyroid section director and assistant professor of clinical medicine at Weill Cornell Medical College/Houston Methodist Academic Institute, Texas, also agrees with Dr. El-Khoury’s suggestion to repeat lab results in some instances.

“I think repeating results, especially in our patients with subclinical disease, is important,” she noted.

And she pointed out that seasonal variation isn’t the only relevant variable. “We also know that multiple clinical factors like pregnancy status, coexisting comorbidities, or age can all influence what we as clinicians consider an acceptable TSH range in an individual patient.” And other medications, such as steroids, or supplements like biotin, “can all affect thyroid lab values,” she noted.

“Ensuring that minor abnormalities aren’t transient is important prior to initiating medical therapy. With any medical therapy there are possible side effects, along with time, cost, [and] monitoring, all of which can be associated with thyroid hormone replacement.”
 

TSH reference ranges should be adapted for subpopulations

Dr. El-Khoury explained that to get an idea of how big the seasonal differences in TSH observed in the Japanese study were, “the upper end of the population they tracked goes from 5.2 [mIU/L] in January to 3.4 [mIU/L] in August. So you have almost a 2-unit change in concentration that can happen in the reference population. But laboratory reference ranges, or ‘normal ranges,’ are usually fixed and don’t change by season.”
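
A minimal sketch of the flagging problem he describes appears below. Only the January (5.2 mIU/L) and August (3.4 mIU/L) upper limits come from the Japanese data quoted above; the fixed cutoff of 4.5 mIU/L and the example result are assumed, illustrative values.

```python
# Illustrative sketch: a fixed vs. a season-adjusted upper reference limit for TSH.
# The January and August limits come from the Japanese data quoted above; the
# fixed cutoff and the example result are assumed values.
FIXED_UPPER_LIMIT = 4.5                                   # mIU/L (assumed typical cutoff)
SEASONAL_UPPER_LIMIT = {"January": 5.2, "August": 3.4}    # mIU/L, from the quoted data

def flag_high_tsh(result_miu_l: float, month: str) -> dict:
    """Flag a TSH result against both a fixed and a season-adjusted upper limit."""
    seasonal_limit = SEASONAL_UPPER_LIMIT.get(month, FIXED_UPPER_LIMIT)
    return {
        "high_by_fixed_range": result_miu_l > FIXED_UPPER_LIMIT,
        "high_by_seasonal_range": result_miu_l > seasonal_limit,
    }

# A winter result of 4.9 mIU/L is "high" by the fixed range but within the January
# seasonal range -- a candidate for repeat testing in 2-3 months rather than an
# immediate levothyroxine prescription.
print(flag_high_tsh(4.9, "January"))
```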

The higher the TSH, the more likely a person is to have hypothyroidism. Major recent studies have found no benefit of levothyroxine treatment with TSH levels below 7.0-10.0 mIU/L, he said.

“So, I suggest that the limit should be 7.0 [mIU/L] to be safe, but it could be as high as 10 [mIU/L]. In any case, let’s shift the mindset to clinical outcome–based treatment cutoffs,” he said, noting that this approach is currently used for decisions on cholesterol-lowering therapy or vitamin D supplementation, for example.

Regarding this suggestion of using a TSH cutoff of 7 mIU/L to diagnose subclinical hypothyroidism, Dr. Cubb said: “It really depends on the specific population. In an elderly patient, a higher TSH may be of less clinical concern when compared to a female who is actively trying to get pregnant.

“Overall, I think we do need to better understand what appropriate TSH ranges are in specific subpopulations, and then with time, make this more understandable and available for general medicine as well as subspecialty providers to be able to utilize,” she noted.

Regarding the particular Japanese findings cited by Dr. El-Khoury, Dr. Cubb observed that this was a very specific study population, “so we would need more data showing that this is more generalizable.”

And she noted that there’s also diurnal variation in TSH. “In the [Japanese] paper, patients had their thyroid labs drawn between 8:00 a.m. and 9:00 a.m. in a fasting state. Oftentimes in the U.S., thyroid labs are not drawn at specific times or [during] fasting. I think this is one of many factors that should be considered.”
 

Acknowledging seasonal variation would be a start

But overall, Dr. Cubb said that both the Japanese study and Dr. El-Khoury’s letter highlight “how season, in and of itself, which is not something we usually think about, can affect thyroid lab results. I believe as more data come out, more generalizable data, that’s how evidence-based guidelines are generated over time.”

According to Dr. El-Khoury, fixing the laboratory reference range issues would likely require a joint effort of professional medical societies, reference laboratories, and assay manufacturers. But with seasonal variation, that might be a difficult task.

“The problem is, in laboratory medicine, we don’t have rules for an analyte that changes by season to do anything different. My goal is to get people to at least acknowledge this is a problem and do something,” he concluded.

Dr. El-Khoury and Dr. Cubb have reported no relevant financial relationships.
 

A version of this article first appeared on Medscape.com.


Guidelines for assessing cancer risk may need updating


A genetic sequencing effort identified more patients to be carriers of risk genes for hereditary breast and ovarian cancer or Lynch syndrome than would have been discovered by following existing genetic testing guidelines, according to new research.

The authors of the clinical trial suggest that these guidelines may need to be revised.

Individuals with hereditary breast and ovarian cancer (HBOC) have an 80% lifetime risk of breast cancer and are at greater risk of ovarian cancer, pancreatic cancer, prostate cancer, and melanoma. Those with Lynch syndrome (LS) have an 80% lifetime risk of colorectal cancer, a 60% lifetime risk of endometrial cancer, and heightened risk of upper gastrointestinal, urinary tract, skin, and other tumors, said study coauthor N. Jewel Samadder, MD, in a statement.

The National Comprehensive Cancer Network (NCCN) has guidelines for determining familial risk for colorectal, breast, ovarian, and pancreatic cancers to identify individuals who should be screened for LS and HBOC, but these rely on personal and family health histories.

“These criteria were created at a time when genetic testing was cost prohibitive and thus aimed to identify those at the greatest chance of being a mutation carrier in the absence of population-wide whole-exome sequencing. However, [LS and HBOC] are poorly identified in current practice, and many patients are not aware of their cancer risk,” said Dr. Samadder, professor of medicine and coleader of the precision oncology program at the Mayo Clinic Comprehensive Cancer Center, Phoenix, in the statement.

Whole-exome sequencing covers only protein-coding regions of the genome, which is less than 2% of the total genome but includes more than 85% of known disease-related genetic variants, according to Emily Gay, who presented the trial results (Abstract 5768) on April 18 at the annual meeting of the American Association for Cancer Research.

“In recent years, the cost of whole-exome sequencing has been rapidly decreasing, allowing us to complete this test on saliva samples from thousands, if not tens of thousands of patients covering large populations and large health systems,” said Ms. Gay, a genetic counseling graduate student at the University of Arizona, during her presentation.

She described results from the TAPESTRY clinical trial, in which 44,306 participants from Mayo Clinic centers in Arizona, Florida, and Minnesota consented to whole-exome sequencing of saliva samples and were screened for pathogenic mutations. The researchers then used electronic health records to determine whether carriers would have satisfied the testing criteria in the NCCN guidelines.

The researchers identified 1.24% of participants as carriers of HBOC or LS. Of the HBOC carriers, 62.8% were female, and of the LS carriers, 62.6% were female. The percentages of HBOC and LS carriers who were White were 88.6% and 94.5%, respectively. The median age of both groups was 57 years. Of HBOC carriers, 47.3% had personal histories of cancer; for LS carriers, the figure was 44.2%.

Of HBOC carriers, 49.1% had been previously unaware of their genetic condition, while an even higher percentage of patients with LS – 59.3% – fell into that category. Thirty-two percent of those with HBOC and 56.2% of those with LS would not have qualified for screening using the relevant NCCN guidelines.

“Most strikingly,” 63.8% of individuals with mutations in the MSH6 gene and 83.7% of those with mutations in the PMS2 gene would not have met NCCN criteria, Ms. Gay said.

Having a cancer type not known to be related to a genetic syndrome was a reason for 58.6% of individuals failing to meet NCCN guidelines, while 60.5% did not meet the guidelines because of an insufficient number of relatives known to have a history of cancer, and 63.3% did not because they had no personal history of cancer. Among individuals with a pathogenic mutation who met NCCN criteria, 34% were not aware of their condition.

“This suggests that the NCCN guidelines are underutilized in clinical practice, potentially due to the busy schedule of clinicians or because [of] the complexity of using these criteria,” said Ms. Gay.

The numbers were even more striking among minorities: “There is additional data analysis and research needed in this area, but based on our preliminary findings, we saw that nearly 50% of the individuals who are [part of an underrepresented minority group] did not meet criteria, compared with 32% of the white cohort,” said Ms. Gay.

Asked what new NCCN guidelines should be, Ms. Gay replied: “I think maybe limiting the number of relatives that you have to have with a certain type of cancer, especially as we see families get smaller and smaller, especially in the United States – that family data isn’t necessarily available or as useful. And then also, I think, incorporating in the size of a family into the calculation, so more of maybe a point-based system like we see with other genetic conditions rather than a ‘yes you meet or no, you don’t.’ More of a range to say ‘you fall on the low-risk, medium-risk, or high-risk stage,’” said Ms. Gay.
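Purely as an illustration of the kind of point-based approach Ms. Gay describes, the short Python sketch below tallies weighted personal- and family-history features and maps the total to a low-, medium-, or high-risk tier. Every feature, weight, and cutoff in it is hypothetical and is not drawn from the TAPESTRY trial or the NCCN guidelines.

    from typing import Optional

    # Hypothetical point-based family-history score. All features, weights,
    # and cutoffs are invented for illustration; they do not come from
    # TAPESTRY or the NCCN guidelines.
    def risk_tier(personal_cancer: bool,
                  affected_relatives: int,
                  known_family_size: int,
                  youngest_dx_age: Optional[int]) -> str:
        """Return a hypothetical low/medium/high risk tier."""
        points = 0
        if personal_cancer:
            points += 3
        # Scale the count of affected relatives by family size so that small
        # families, with few relatives available to be affected, are not penalized.
        if known_family_size > 0:
            points += round(3 * affected_relatives / known_family_size)
        if youngest_dx_age is not None and youngest_dx_age < 50:
            points += 2
        if points >= 5:
            return "high risk"
        if points >= 3:
            return "medium risk"
        return "low risk"

    # Example: one affected relative in a family of four, diagnosed at age 45,
    # scores as "medium risk" under these made-up weights.
    print(risk_tier(False, affected_relatives=1, known_family_size=4,
                    youngest_dx_age=45))

The only point of such a sketch is that a graded score can express how close a small family comes to meeting criteria, rather than the current pass/fail answer.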

During the Q&A period, session cochair Andrew Godwin, PhD, who is a professor of molecular oncology and pathology at University of Kansas Medical Center, Kansas City, said he wondered if whole-exome sequencing was capable of picking up cancer risk mutations that standard targeted tests don’t look for.

Dr. Samadder, who was in the audience, answered the question, saying that targeted tests are actually better at picking up some types of mutations like intronic mutations, single-nucleotide polymorphisms, and deletions.

“There are some limitations to whole-exome sequencing. Our estimate here of 1.2% [of participants carrying HBOC or LS mutations] is probably an underestimate. There are additional variants that exome sequencing probably doesn’t pick up easily or as well. That’s why we qualify that exome sequencing is a screening test, not a diagnostic,” he continued.

Ms. Gay and Dr. Samadder have no relevant financial disclosures. Dr. Godwin has financial relationships with Clara Biotech, VITRAC Therapeutics, and Sinochips Diagnostics.



FROM AACR 2023
