Clinical Endocrinology News is an independent news source that provides endocrinologists with timely and relevant news and commentary about clinical developments and the impact of health care policy on the endocrinologist's practice. Specialty topics include Diabetes, Lipid & Metabolic Disorders, Menopause, Obesity, Osteoporosis, Pediatric Endocrinology, Pituitary, Thyroid & Adrenal Disorders, and Reproductive Endocrinology. Featured content includes Commentaries, Implementing Health Reform, Law & Medicine, and In the Loop, the blog of Clinical Endocrinology News. Clinical Endocrinology News is owned by Frontline Medical Communications.

Theme
medstat_cen
Top Sections
Commentary
Law & Medicine
endo
Main menu
CEN Main Menu
Explore menu
CEN Explore Menu
Proclivity ID
18807001
Unpublish
Specialty Focus
Men's Health
Diabetes
Pituitary, Thyroid & Adrenal Disorders
Endocrine Cancer
Menopause
Negative Keywords
a child less than 6
addict
addicted
addicting
addiction
adult sites
alcohol
antibody
ass
attorney
audit
auditor
babies
babpa
baby
ban
banned
banning
best
bisexual
bitch
bleach
blog
blow job
bondage
boobs
booty
buy
cannabis
certificate
certification
certified
cheap
cheapest
class action
cocaine
cock
counterfeit drug
crack
crap
crime
criminal
cunt
curable
cure
dangerous
dangers
dead
deadly
death
defend
defended
depedent
dependence
dependent
detergent
dick
die
dildo
drug abuse
drug recall
dying
fag
fake
fatal
fatalities
fatality
free
fuck
gangs
gingivitis
guns
hardcore
herbal
herbs
heroin
herpes
home remedies
homo
horny
hypersensitivity
hypoglycemia treatment
illegal drug use
illegal use of prescription
incest
infant
infants
job
ketoacidosis
kill
killer
killing
kinky
law suit
lawsuit
lawyer
lesbian
marijuana
medicine for hypoglycemia
murder
naked
natural
newborn
nigger
noise
nude
nudity
orgy
over the counter
overdosage
overdose
overdosed
overdosing
penis
pimp
pistol
porn
porno
pornographic
pornography
prison
profanity
purchase
purchasing
pussy
queer
rape
rapist
recall
recreational drug
rob
robberies
sale
sales
sex
sexual
shit
shoot
slut
slutty
stole
stolen
store
sue
suicidal
suicide
supplements
supply company
theft
thief
thieves
tit
toddler
toddlers
toxic
toxin
tragedy
treating dka
treating hypoglycemia
treatment for hypoglycemia
vagina
violence
whore
withdrawal
without prescription
Negative Keywords Excluded Elements
header[@id='header']
section[contains(@class, 'nav-hidden')]
footer[@id='footer']
div[contains(@class, 'pane-pub-article-imn')]
div[contains(@class, 'pane-pub-home-imn')]
div[contains(@class, 'pane-pub-topic-imn')]
div[contains(@class, 'panel-panel-inner')]
div[contains(@class, 'pane-node-field-article-topics')]
section[contains(@class, 'footer-nav-section-wrapper')]
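The XPath rules above mark site chrome (header, footer, publication panes) whose text should be ignored when a page is scanned for the negative keywords listed earlier. As a purely illustrative sketch (not the publisher's actual implementation; the rule triples and function names here are mine, and only a subset of the rules is shown, approximated for the Python standard library since `contains()` predicates are not supported by `xml.etree`):

```python
# Hypothetical negative-keyword screen: text inside excluded elements
# (site chrome) is skipped; only article text is scanned for keywords.
from html.parser import HTMLParser

# Stdlib approximations of three of the XPath rules listed above:
#   header[@id='header']                          -> ("header", "id", "header")
#   footer[@id='footer']                          -> ("footer", "id", "footer")
#   div[contains(@class,'pane-pub-article-imn')]  -> ("div", "class", "pane-pub-article-imn")
EXCLUDE_RULES = [
    ("header", "id", "header"),
    ("footer", "id", "footer"),
    ("div", "class", "pane-pub-article-imn"),
]

class KeywordScanner(HTMLParser):
    """Collect page text, skipping any subtree matched by EXCLUDE_RULES.

    Assumes well-formed HTML (every start tag has a matching end tag)."""
    def __init__(self):
        super().__init__()
        self.depth_excluded = 0   # >0 while inside an excluded subtree
        self.text_parts = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if self.depth_excluded:
            self.depth_excluded += 1  # nested inside an excluded element
        elif any(tag == t and needle in attrs.get(a, "")
                 for t, a, needle in EXCLUDE_RULES):
            self.depth_excluded = 1

    def handle_endtag(self, tag):
        if self.depth_excluded:
            self.depth_excluded -= 1

    def handle_data(self, data):
        if not self.depth_excluded:
            self.text_parts.append(data)

def flagged_keywords(page_html, negative_keywords):
    """Return the negative keywords that appear in non-excluded page text."""
    scanner = KeywordScanner()
    scanner.feed(page_html)
    text = " ".join(scanner.text_parts).lower()
    return sorted(k for k in negative_keywords if k in text)

page = ("<html><body>"
        "<header id='header'>Buy cheap supplements!</header>"
        "<div class='content'>A trial in alcohol use disorder.</div>"
        "<footer id='footer'>lawsuit lawyer</footer>"
        "</body></html>")
print(flagged_keywords(page, ["alcohol", "cheap", "lawsuit"]))  # only 'alcohol' survives
```

Here "cheap" and "lawsuit" appear only inside the excluded header and footer, so only "alcohol", which occurs in the article body, is flagged.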
Altmetric
Article Authors "autobrand" affiliation
Clinical Endocrinology News
DSM Affiliated
Display in offset block
Disqus Exclude
Best Practices
CE/CME
Education Center
Medical Education Library
Enable Disqus
Display Author and Disclosure Link
Publication Type
News
Slot System
Featured Buckets
Disable Sticky Ads
Disable Ad Block Mitigation
Featured Buckets Admin
Show Ads on this Publication's Homepage
Consolidated Pub
Show Article Page Numbers on TOC
Use larger logo size
Off

FDA approves first-ever OTC erectile dysfunction gel

Article Type
Changed
Wed, 06/14/2023 - 11:26

A topical gel that may work faster than erectile dysfunction pills has been approved for over-the-counter use in the United States. The gel, which can help users get an erection within 10 minutes, is already available without a prescription in Europe.

The Food and Drug Administration has approved the drug, called Eroxon, noting that it is a first-of-its-kind treatment. Eroxon is made by the British pharmaceutical company Futura Medical, which specializes in drugs that are given through the skin.

According to the product’s leaflet, Eroxon “stimulates blood flow in the penis through a unique physical cooling then warming effect, helping you get and keep an erection hard enough for sex.” The company said on the product’s website that 65% of people who used the drug were able to have sex. 

A company spokesperson told CNN that the price of the product has not been set in the United States, but it costs the equivalent of about $31 in the United Kingdom. Futura Medical has not announced when it will be available in the United States.

Harvard Health reports that 30 million people in the United States have erectile dysfunction, which means a person cannot get an erection at all or one firm enough to have sex. The disorder is often linked to other physical or mental health problems, such as heart problems or clogged arteries.

Erectile dysfunction affects 1% of men in their 40s, 17% of men in their 60s, and nearly 50% of men who are age 75 or older, according to Harvard Health.

A version of this article originally appeared on WebMD.com.

Publications
Topics
Sections

Publications
Publications
Topics
Article Type
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article

Could semaglutide treat addiction as well as obesity?

Article Type
Changed
Tue, 06/13/2023 - 15:04

Could glucagonlike peptide–1 (GLP-1) receptor agonists such as semaglutide – approved as Ozempic to treat type 2 diabetes and as Wegovy to treat obesity, both from Novo Nordisk – also curb addictions and compulsive behaviors?

As demand for semaglutide for weight loss grew following approval of Wegovy by the U.S. Food and Drug Administration in 2021, anecdotal reports of unexpected potential added benefits also began to surface.

Some patients taking these drugs for type 2 diabetes or weight loss also lost interest in addictive and compulsive behaviors such as drinking alcohol, smoking, shopping, nail biting, and skin picking, as reported in articles in the New York Times and The Atlantic, among others.

There is also some preliminary research to support these observations.

This news organization invited three experts to weigh in.
 

Recent and upcoming studies

The senior author of a recent randomized controlled trial of 127 patients with alcohol use disorder (AUD), Anders Fink-Jensen, MD, said: “I hope that GLP-1 analogs in the future can be used against AUD, but before that can happen, several GLP-1 trials [are needed to] prove an effect on alcohol intake.”

His study involved patients who received exenatide (Byetta, Bydureon, AstraZeneca), the first-generation GLP-1 agonist approved for type 2 diabetes, over 26 weeks, but treatment did not reduce the number of heavy drinking days (the primary outcome), compared with placebo.  

However, in post hoc, exploratory analyses, heavy drinking days and total alcohol intake were significantly reduced in the subgroup of patients with AUD and obesity (body mass index > 30 kg/m2).

The participants were also shown pictures of alcohol or neutral subjects while they underwent functional magnetic resonance imaging. Those who had received exenatide, compared with placebo, had significantly less activation of brain reward centers when shown the pictures of alcohol.

“Something is happening in the brain and activation of the reward center is hampered by the GLP-1 compound,” Dr. Fink-Jensen, a clinical psychiatrist at the Psychiatric Centre Copenhagen, remarked in an email.

“If patients with AUD already fulfill the criteria for semaglutide (or other GLP-1 analogs) by having type 2 diabetes and/or a BMI over 30 kg/m2, they can of course use the compound right now,” he noted.

His team is also beginning a study in patients with AUD and a BMI ≥ 30 kg/m2 to investigate the effects on alcohol intake of semaglutide up to 2.4 mg weekly, the maximum dose currently approved for obesity in the United States.
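The eligibility cutoff here is the standard obesity threshold: body mass index is weight in kilograms divided by height in meters squared, and enrollment requires a value of 30 kg/m² or higher. A minimal sketch of that arithmetic (function and parameter names are illustrative, not taken from the trial protocol):

```python
# BMI = weight (kg) / height (m)^2; the study above enrolls patients with
# AUD and BMI >= 30 kg/m^2. Names here are illustrative only.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def meets_bmi_criterion(weight_kg, height_m, threshold=30.0):
    return bmi(weight_kg, height_m) >= threshold

print(round(bmi(95, 1.75), 1))        # 95 kg at 1.75 m -> BMI 31.0
print(meets_bmi_criterion(95, 1.75))  # True: meets the >= 30 cutoff
print(meets_bmi_criterion(75, 1.75))  # False: BMI ~24.5
```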

“Based on the potency of exenatide and semaglutide,” Dr. Fink-Jensen said, “we expect that semaglutide will cause a stronger reduction in alcohol intake” than exenatide.

Animal studies have also shown that GLP-1 agonists suppress alcohol-induced reward, alcohol intake, motivation to consume alcohol, alcohol seeking, and relapse drinking of alcohol, Elisabet Jerlhag Holm, PhD, noted.

Interestingly, these agents also suppress the reward, intake, and motivation to consume other addictive drugs like cocaine, amphetamine, nicotine, and some opioids, Jerlhag Holm, professor, department of pharmacology, University of Gothenburg, Sweden, noted in an email.

In a recently published preclinical study, her group provides evidence to help explain anecdotal reports from patients with obesity treated with semaglutide who claim they also reduced their alcohol intake. In the study, semaglutide both reduced alcohol intake (and relapse-like drinking) and decreased body weight of rats of both sexes.

“Future research should explore the possibility of semaglutide decreasing alcohol intake in patients with AUD, particularly those who are overweight,” said Prof. Holm.

“AUD is a heterogeneous disorder, and one medication is most likely not helpful for all AUD patients,” she added. “Therefore, an arsenal of different medications is beneficial when treating AUD.”

Janice J. Hwang, MD, MHS, echoed these thoughts: “Anecdotally, there are a lot of reports from patients (and in the news) that this class of medication [GLP-1 agonists] impacts cravings and could impact addictive behaviors.”

“I would say, overall, the jury is still out,” as to whether anecdotal reports of GLP-1 agonists curbing addictions will be borne out in randomized controlled trials.

“I think it is much too early to tell” whether these drugs might be approved for treating addictions without more solid clinical trial data, noted Dr. Hwang, who is an associate professor of medicine and chief, division of endocrinology and metabolism, at the University of North Carolina at Chapel Hill.

Meanwhile, another research group at the University of North Carolina at Chapel Hill, led by psychiatrist Christian Hendershot, PhD, is conducting a clinical trial in 48 participants with AUD who are also smokers.

They aim to determine if patients who receive semaglutide at escalating doses (0.25 mg to 1.0 mg per week via subcutaneous injection) over 9 weeks will consume less alcohol (the primary outcome) and smoke less (a secondary outcome) than those who receive a placebo injection. Results are expected in October 2023.
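The article gives only the endpoints of the titration (0.25 mg up to 1.0 mg weekly over 9 weeks), not the intermediate steps, so the following is a purely hypothetical schedule builder: the dose doubles at a fixed interval until the target is reached, and the 3-week step interval is an assumption, not the trial's protocol.

```python
# Illustrative dose-escalation schedule: start at start_mg, double every
# step_weeks, cap at target_mg. The step interval is assumed, not sourced.
def escalation_schedule(start_mg, target_mg, total_weeks, step_weeks):
    """Return a list of (week, dose_mg) pairs for the titration."""
    schedule = []
    dose = start_mg
    for week in range(1, total_weeks + 1):
        schedule.append((week, dose))
        if week % step_weeks == 0:
            dose = min(dose * 2, target_mg)  # never exceed the target dose
    return schedule

for week, dose in escalation_schedule(0.25, 1.0, total_weeks=9, step_weeks=3):
    print(f"week {week}: {dose} mg")
```

With these assumed parameters the schedule runs 0.25 mg for weeks 1-3, 0.5 mg for weeks 4-6, and 1.0 mg for weeks 7-9.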

Dr. Fink-Jensen has received an unrestricted research grant from Novo Nordisk to investigate the effects of GLP-1 receptor stimulation on weight gain and metabolic disturbances in patients with schizophrenia treated with an antipsychotic.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Publications
Publications
Topics
Article Type
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article

Hormone therapies still ‘most effective’ in treating menopausal vasomotor symptoms

Article Type
Changed
Wed, 06/14/2023 - 15:32

Despite new options in non–hormone-based treatments, hormone therapy remains the most effective treatment for vasomotor symptoms (VMS) and should be considered for healthy menopausal women without contraindications within 10 years of their final menstrual periods.

This recommendation emerged from an updated position statement from the North American Menopause Society in its first review of the scientific literature since 2015. The statement specifically targets nonhormonal management of symptoms such as hot flashes and night sweats, which occur in as many as 80% of menopausal women but are undertreated. The statement appears in the June issue of Menopause, the journal of The North American Menopause Society.

“Women with contraindications or objections to hormone treatment should be informed by professionals of evidence-based effective nonhormone treatment options,” stated a NAMS advisory panel led by Chrisandra L. Shufelt, MD, MS, professor and chair of the division of general internal medicine and associate director of the Women’s Health Research Center at the Mayo Clinic in Jacksonville, Fla. The statement is one of multiple NAMS updates performed at regular intervals, said Dr. Shufelt, also past president of NAMS, in an interview. “But the research has changed, and we wanted to make clinicians aware of new medications. One of our interesting findings was more evidence that off-label use of the nonhormonal overactive bladder drug oxybutynin can lower the rate of hot flashes.”

Dr. Shufelt noted that many of the current update’s findings align with previous research, and stressed that the therapeutic recommendations apply specifically to VMS. “Not all menopause-related symptoms are vasomotor, however,” she said. “While a lot of the lifestyle options such as cooling techniques and exercise are not recommended for controlling hot flashes, diet and exercise changes can be beneficial for other health reasons.”

Although it’s the most effective option for VMS, hormone therapy is not suitable for women with contraindications such as a previous blood clot, an estrogen-dependent cancer, a family history of such cancers, or a personal preference against hormone use, Dr. Shufelt added, so nonhormonal alternatives are important to prevent women from wasting time and money on ineffective remedies. “Women need to know what works and what doesn’t,” she said.
 

Recommended nonhormonal therapies

Based on a rigorous review of the scientific evidence to date, NAMS found the following therapies to be effective: cognitive-behavioral therapy; clinical hypnosis; SSRIs and serotonin-norepinephrine reuptake inhibitors, which yield mild to moderate improvements; gabapentin, which lessens the frequency and severity of hot flashes; fezolinetant (Veozah), a novel first-in-class neurokinin B antagonist that was Food and Drug Administration–approved in May for VMS; oxybutynin, an antimuscarinic, anticholinergic drug that reduces moderate to severe VMS, although long-term use in older adults may be linked to cognitive decline; weight loss; and stellate ganglion block.

Therapies that were ineffective, associated with adverse effects (AEs), or lacking adequate evidence of efficacy, and thus not recommended for VMS, included: paced respiration; supplemental and herbal remedies such as black cohosh, milk thistle, and evening primrose; cooling techniques; trigger avoidance; exercise and yoga; mindfulness-based intervention and relaxation; suvorexant, a dual orexin-receptor antagonist used for insomnia; soy foods, extracts, and the soy metabolite equol; cannabinoids; acupuncture; calibration of neural oscillations; chiropractics; clonidine, an alpha-2 adrenergic agonist that is associated with significant AEs and has no recent evidence of benefit over placebo; dietary modification; and pregabalin, which is associated with significant AEs and has controlled-substance prescribing restrictions.

Ultimately, clinicians should individualize menopause care to each patient. For example, “if a patient says that avoiding caffeine in the morning stops her from having hot flashes in the afternoon, that’s fine,” Dr. Shufelt said.

HT still most effective

“This statement is excellent, comprehensive, and evidence-based,” commented Jill M. Rabin MD, vice chair of education and development, obstetrics and gynecology, at Northshore University Hospital/LIJ Medical Center in Manhasset, N.Y., and professor of obstetrics and gynecology at the Donald and Barbara Zucker School of Medicine at Hofstra/Northwell Health in Hempstead, N.Y.

Dr. Rabin, coauthor of Mind Over Bladder, was not involved in compiling the statement.

She agreed that hormone therapy is the most effective option for VMS and regularly prescribes it for suitable candidates in different forms depending on the type and severity of menopausal symptoms. As for nonhormonal options, Dr. Rabin added in an interview, some of those not recommended in the current NAMS statement could yet prove to be effective as more data accumulate. Suvorexant may be one to watch, for instance, but currently there are not enough data on its effectiveness.

“It’s really important to keep up on this nonhormonal research,” Dr. Rabin said. “As the population ages, more and more women will be in the peri- and postmenopausal periods and some have medical reasons for not taking hormone therapy.” It’s important to recommend nonhormonal therapies of proven benefit according to current high-level evidence, she said, “but also to keep your ear to the ground about those still under investigation.”

As for the lifestyle and alternative remedies of unproven benefit, Dr. Rabin added, there’s little harm in trying them. “As far as I know, no one’s ever died of relaxation and paced breathing.” In addition, a patient’s interaction with and sense of control over her own physiology provided by these techniques may be beneficial in themselves.

Dr. Shufelt reported grant support from the National Institutes of Health. Numerous authors reported consulting fees from and other financial ties to private-sector companies. Dr. Rabin had no relevant competing interests to disclose with regard to her comments.


FROM THE JOURNAL OF THE NORTH AMERICAN MENOPAUSE SOCIETY


WHO advises against nonsugar sweeteners for weight control


A new guideline from the World Health Organization on nonsugar sweeteners (NSSs) recommends not using them to control weight or reduce the risk for diabetes, heart disease, or cancer. These sweeteners include aspartame, acesulfame K, advantame, saccharin, sucralose, stevia, and stevia derivatives.

The recommendation is based on the findings of a systematic review that collected data from 283 studies in adults, children, pregnant women, and mixed populations.

The findings suggest that use of NSSs does not confer any long-term benefit in reducing body fat in adults or children. They also suggest that long-term use of NSSs may have potential undesirable effects.

In the short term, NSS use results in a small reduction in body weight and body mass index in adults, without significant effects on other measures of adiposity or cardiometabolic health, including fasting glucose, insulin, blood lipids, and blood pressure.

Conversely, on a long-term basis, results from prospective cohort studies suggest that higher NSS intake is associated with increased risk for type 2 diabetes, cardiovascular diseases, and all-cause mortality in adults (very low– to low-certainty evidence). 

Regarding the risk for cancer, results from case-control studies suggest an association between saccharin intake and bladder cancer (very low–certainty evidence), but significant associations for other types of cancer were not observed in case-control studies or meta-analyses of prospective cohort studies.

Relatively few studies were available for children, and the results were largely inconclusive.

Finally, results for pregnant women suggest that higher NSS intake is associated with increased risk for preterm birth (low-certainty evidence) and possibly adiposity in offspring (very low–certainty evidence).
 

Reducing sugar consumption

“Replacing free sugars with NSS does not help with weight control in the long-term. People need to consider other ways to reduce free sugars intake, such as consuming food with naturally occurring sugars, like fruit, or unsweetened food and beverages,” Francesco Branca, MD, PhD, WHO director of the department of nutrition and food safety, said in a press release. 

“NSSs are not essential dietary factors and have no nutritional value. People should reduce the sweetness of the diet altogether, starting early in life, to improve their health,” he added.
 

Applying the guideline

The recommendation applies to all people except individuals with preexisting diabetes and includes all synthetic and naturally occurring or modified nonnutritive sweeteners, said the WHO. 

The recommendation does not apply to personal care and hygiene products containing NSSs, such as toothpaste, skin cream, and medications, or to low-calorie sugars and sugar alcohols (polyols).

Because the link observed in the evidence between NSSs and disease outcomes might be confounded by the baseline characteristics of study participants and complicated patterns of NSS use, the recommendation has been assessed as “conditional” by the WHO. 

“This signals that policy decisions based on this recommendation may require substantive discussion in specific country contexts, linked for example to the extent of consumption in different age groups,” said the WHO press release. 

This article was translated from the Medscape French Edition. A version of the article appeared on Medscape.com.


Low-carb breakfast key to lower glucose variability in T2D?


A low-carbohydrate breakfast was more effective than a control (low-fat) breakfast at decreasing glycemic variability throughout the day in people with type 2 diabetes, new research shows.

These findings from a 3-month randomized study in 121 patients in Canada and Australia were published online recently in the American Journal of Clinical Nutrition.

The researchers aimed to determine whether a low-carbohydrate, high-fat breakfast (focused around eggs), compared with a standard, low-fat control breakfast (designed to have no/minimal eggs), would improve blood glucose control in individuals with type 2 diabetes.

“We’ve determined that if the first meal of the day is low-carb and higher in protein and fat we can limit hyperglycemic swings,” lead author Barbara Oliveira, PhD, School of Health and Exercise Sciences, University of British Columbia, Kelowna, said in a press release from the university.

“Having fewer carbs for breakfast not only aligns better with how people with [type 2 diabetes] handle glucose throughout the day,” she noted, “but it also has incredible potential for people with [type 2 diabetes] who struggle with their glucose levels in the morning.”

“By making a small adjustment to the carb content of a single meal rather than the entire diet,” Dr. Oliveira added, “we have the potential to increase adherence significantly while still obtaining significant benefits.”

The researchers conclude that “this trial provides evidence that advice to consume a low-carbohydrate breakfast could be a simple, feasible, and effective approach to manage postprandial hyperglycemia and lower glycemic variability in people living with type 2 diabetes.”
 

Could breakfast tweak improve glucose control?

People with type 2 diabetes have higher levels of insulin resistance and greater glucose intolerance in the morning, the researchers write.

And consuming a low-fat, high-carbohydrate meal in line with most dietary guidelines appears to produce the largest hyperglycemic spike and lead to higher glycemic variability.

They speculated that eating a low-carb breakfast, compared with a low-fat breakfast, might be an easy way to mitigate this.

They recruited participants from online ads in three provinces in Canada and four states in Australia, and they conducted the study from a site in British Columbia and one in Wollongong, Australia.

The participants were aged 20-79 years and diagnosed with type 2 diabetes. They also had a current hemoglobin A1c < 8.5% and no allergies to eggs, and they were able to follow remote, online guidance.

After screening, the participants had a phone or video conference call with a member of the research team who explained the study.

The researchers randomly assigned 75 participants in Canada and 46 participants in Australia 1:1 to the low-carbohydrate intervention or the control intervention.

The participants had a mean age of 64 and 53% were women. They had a mean weight of 93 kg (204 lb), body mass index of 32 kg/m2, and A1c of 7.0%.

Registered dietitians in Canada and Australia each designed 8-10 recipes/menus for low-carb breakfasts and an equal number of recipes/menus for control (low-fat) breakfasts that were specific for those countries.

Each recipe contains about 450 kcal; the recipes are available in Supplemental Appendices 1A and 1B accompanying the article.

Each low-carbohydrate breakfast contains about 25 g protein, 8 g carbohydrates, and 37 g fat. For example, one breakfast is a three-egg omelet with spinach.

Each control (low-fat) recipe contains about 20 g protein, 56 g carbohydrates, and 15 g fat. For example, one breakfast is a small blueberry muffin and a small plain Greek yogurt.
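As a quick plausibility check (not part of the study), the stated macronutrient profiles can be converted to energy using the standard Atwater factors of 4 kcal/g for protein and carbohydrate and 9 kcal/g for fat; both breakfasts land near the stated ~450 kcal:

```python
# Sketch using standard Atwater factors; the gram figures are the
# per-breakfast values reported in the article.

def atwater_kcal(protein_g: float, carb_g: float, fat_g: float) -> float:
    """Estimate energy content from macronutrients (Atwater factors)."""
    return 4 * protein_g + 4 * carb_g + 9 * fat_g

low_carb = atwater_kcal(protein_g=25, carb_g=8, fat_g=37)   # 465 kcal
control = atwater_kcal(protein_g=20, carb_g=56, fat_g=15)   # 439 kcal

print(low_carb, control)  # both close to the stated ~450 kcal per recipe
```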

The participants were advised to select one of these breakfasts every day and follow it exactly (they were also required to upload a photograph of their breakfast every morning). They were not given any guidance or calorie restriction for the other meals of the day.

The participants also filled in 3-day food records and answered a questionnaire about exercise, hunger, and satiety, at the beginning, middle, and end of the intervention.

They provided self-reported height, weight, and waist circumference, and they were given requisitions for blood tests for A1c to be done at a local laboratory, at the beginning and end of the intervention.

The participants also wore a continuous glucose monitor (CGM) during the first and last 14 days of the intervention.

Intervention improved CGM measures

There was no significant difference between the two groups in the primary outcome, change in A1c, at the end of 12 weeks. The mean A1c decreased by 0.3% in the intervention group vs. 0.1% in the control group (P = .06).

Similarly, in secondary outcomes, weight and BMI each decreased about 1% and waist circumference decreased by about 2.5 cm in each group at 12 weeks (no significant difference). There were also no significant differences in hunger, satiety, or physical activity between the two groups.

However, the 24-hour CGM data showed that mean and maximum glucose, glycemic variability, and time above range were all significantly lower in participants in the low-carbohydrate breakfast intervention group vs. those in the control group (all P < .05).

Time in range was significantly higher among participants in the intervention group (P < .05).

In addition, the 2-hour postprandial CGM data showed that mean glucose and maximum glucose after breakfast were lower in participants in the low-carbohydrate breakfast group than in the control group.
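The CGM outcomes reported above (mean and maximum glucose, glycemic variability, and time in or above range) are standard summary statistics. As an illustration only, assuming made-up readings and the widely used 70–180 mg/dL target range, they might be computed like this:

```python
# Illustrative sketch (not the study's analysis code). Readings are
# hypothetical values in mg/dL; variability is expressed as the
# coefficient of variation (CV), a common CGM variability metric.
from statistics import mean, stdev

def cgm_summary(readings_mg_dl: list[float],
                lo: float = 70, hi: float = 180) -> dict:
    m = mean(readings_mg_dl)
    cv = 100 * stdev(readings_mg_dl) / m            # glycemic variability, %
    in_range = sum(lo <= g <= hi for g in readings_mg_dl)
    above = sum(g > hi for g in readings_mg_dl)
    n = len(readings_mg_dl)
    return {
        "mean": m,
        "max": max(readings_mg_dl),
        "cv_percent": cv,
        "time_in_range_pct": 100 * in_range / n,
        "time_above_range_pct": 100 * above / n,
    }

# A hypothetical day of readings condensed to a few samples:
print(cgm_summary([95, 110, 150, 190, 160, 130, 105]))
```

In the study, lower mean/max glucose, lower variability, and less time above range in the intervention group correspond to improvements in exactly these kinds of summaries.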

This work was supported by investigator-initiated operating grants to senior author Jonathan P. Little, PhD, School of Health and Exercise Sciences, University of British Columbia, from the Egg Nutrition Center, United States, and Egg Farmers of Canada. The authors declare that they have no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

Publications
Topics
Sections

 

A low-carbohydrate breakfast was better than a control (low-fat) breakfast to decrease glycemic variability throughout the day in type 2 diabetes, in new research.

These findings from a 3-month randomized study in 121 patients in Canada and Australia were published online recently in the American Journal of Clinical Nutrition.

The researchers aimed to determine whether a low-carbohydrate, high-fat breakfast (focused around eggs), compared with a standard, low-fat control breakfast (designed to have no/minimal eggs), would improve blood glucose control in individuals with type 2 diabetes.

“We’ve determined that if the first meal of the day is low-carb and higher in protein and fat we can limit hyperglycemic swings,” lead author Barbara Oliveira, PhD, School of Health and Exercise Sciences, University of British Columbia, Kelowna, said in a press release from the university.

“Having fewer carbs for breakfast not only aligns better with how people with [type 2 diabetes] handle glucose throughout the day,” she noted, “but it also has incredible potential for people with [type 2 diabetes] who struggle with their glucose levels in the morning.”

“By making a small adjustment to the carb content of a single meal rather than the entire diet,” Dr. Oliveira added, “we have the potential to increase adherence significantly while still obtaining significant benefits.”

The researchers conclude that “this trial provides evidence that advice to consume a low-carbohydrate breakfast could be a simple, feasible, and effective approach to manage postprandial hyperglycemia and lower glycemic variability in people living with type 2 diabetes.”
 

Could breakfast tweak improve glucose control?

People with type 2 diabetes have higher levels of insulin resistance and greater glucose intolerance in the morning, the researchers write.

And consuming a low-fat, high-carbohydrate meal in line with most dietary guidelines appears to incur the highest hyperglycemia spike and leads to higher glycemic variability.

They speculated that eating a low-carb breakfast, compared with a low-fat breakfast, might be an easy way to mitigate this.

They recruited participants from online ads in three provinces in Canada and four states in Australia, and they conducted the study from a site in British Columbia and one in Wollongong, Australia.

The participants were aged 20-79 years and diagnosed with type 2 diabetes. They also had a current hemoglobin A1c < 8.5% and no allergies to eggs, and they were able to follow remote, online guidance.

After screening, the participants had a phone or video conference call with a member of the research team who explained the study.

The researchers randomly assigned 75 participants in Canada and 46 participants in Australia 1:1 to the low-carbohydrate intervention or the control intervention.

The participants had a mean age of 64 years, and 53% were women. They had a mean weight of 93 kg (204 lb), a mean body mass index of 32 kg/m², and a mean A1c of 7.0%.

Registered dietitians in Canada and Australia each designed 8-10 recipes/menus for low-carb breakfasts and an equal number for control (low-fat) breakfasts, specific to each country.

Each recipe contains about 450 kcal; all of the recipes are available in Supplemental Appendices 1A and 1B of the article.

Each low-carbohydrate breakfast contains about 25 g protein, 8 g carbohydrates, and 37 g fat. For example, one breakfast is a three-egg omelet with spinach.

Each control (low-fat) recipe contains about 20 g protein, 56 g carbohydrates, and 15 g fat. For example, one breakfast is a small blueberry muffin and a small plain Greek yogurt.

The participants were advised to select one of these breakfasts every day and follow it exactly (they were also required to upload a photograph of their breakfast every morning). They were not given any guidance or calorie restriction for the other meals of the day.

The participants also filled in 3-day food records and answered a questionnaire about exercise, hunger, and satiety, at the beginning, middle, and end of the intervention.

They provided self-reported height, weight, and waist circumference, and they were given requisitions for blood tests for A1c to be done at a local laboratory, at the beginning and end of the intervention.

The participants also wore a continuous glucose monitor (CGM) during the first and last 14 days of the intervention.
 

 

 

Intervention improved CGM measures

There was no significant difference between the two groups in the primary outcome, change in A1c, at the end of 12 weeks. The mean A1c decreased by 0.3% in the intervention group vs 0.1% in the control group (P = .06).

Similarly, in secondary outcomes, weight and BMI each decreased about 1% and waist circumference decreased by about 2.5 cm in each group at 12 weeks (no significant difference). There were also no significant differences in hunger, satiety, or physical activity between the two groups.

However, the 24-hour CGM data showed that mean and maximum glucose, glycemic variability, and time above range were all significantly lower in participants in the low-carbohydrate breakfast intervention group vs. those in the control group (all P < .05).

Time in range was significantly higher among participants in the intervention group (P < .05).

In addition, the 2-hour postprandial CGM data showed that mean glucose and maximum glucose after breakfast were lower in participants in the low-carbohydrate breakfast group than in the control group.

This work was supported by investigator-initiated operating grants to senior author Jonathan P. Little, PhD, School of Health and Exercise Sciences, University of British Columbia, from the Egg Nutrition Center, United States, and Egg Farmers of Canada. The authors declare that they have no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

 

FROM THE AMERICAN JOURNAL OF CLINICAL NUTRITION

High Lp(a) tied to higher coronary plaque volume, progression


Patients with high lipoprotein(a) (Lp[a]) levels not only have an almost twofold higher coronary plaque burden than those with low levels but also a faster rate of plaque progression, an observational imaging study shows.

This could explain the greater risk for major adverse cardiovascular events seen in patients with high Lp(a) levels, suggests the research, presented during the annual European Atherosclerosis Society Congress.

The team performed follow-up coronary CT angiography (CCTA) on almost 275 patients who had undergone imaging approximately 10 years earlier, finding that almost one-third had high Lp(a) levels.

At baseline, percent plaque volumes were 1.8 times greater in patients with high Lp(a) levels than in those with low levels. After 10 years, plaque volumes were 3.3 times larger in patients with high Lp(a) levels.

Over this period, the rate of increase of plaque volume was 1.9 times greater in patients with high Lp(a) levels.

Study presenter Nick S. Nurmohamed, MD, PhD candidate, department of vascular medicine, Amsterdam University Medical Centers, also showed that high Lp(a) levels were associated with a 2.1-fold increase in rates of MACE.

He said in an interview that this finding could be related to Lp(a) increasing inflammatory signaling in the plaque, “making it more prone to rupture, and we saw that on the CCTA scans,” where high Lp(a) levels were associated with the presence of more high-risk plaques.

He added that in the absence of drugs that target Lp(a) levels directly, the results underline the need to focus on other means of lipid-lowering, as well as “creating awareness that Lp(a) is associated with plaque formation.”

Dr. Nurmohamed said that “for the moment, we have to treat patients with high Lp(a) with other risk-lowering therapies, such as low-density lipoprotein [LDL] cholesterol–lowering drugs, and the management of other risk factors.”

However, he noted that “there are a couple of Lp(a)-lowering medications in trials,” with results expected in the next 2-3 years.

“Then we will have the means to treat those patients, and with CCTA we can identify the patients with the biggest risk,” Dr. Nurmohamed added.
 

Plaque burden

Philippe Moulin, MD, PhD, head of endocrinology and professor of human nutrition at Faculté Lyon Est, Claude Bernard Lyon (France) 1 University, said that the association between Lp(a) and plaque burden has been seen previously in the literature in a very similar study but with only 1-year follow-up.

Similarly, registry data have suggested that Lp(a) is associated with worsening plaque progression over time.

“Here, with 10-year follow-up, [the study] is much more interesting,” due to its greater statistical power, he said in an interview. It is also “well-documented” and uses an “appropriate” methodology.

But Dr. Moulin underlined that the number of patients with high Lp(a) levels included in the study is relatively small.

Consequently, the researchers were not able to look at the level and rate of progression of atherosclerosis between different quartiles of Lp(a), “so you have no dose-response analysis.”

It also does not “establish causality,” as it remains an observational study, despite being longitudinal, “well done, and so on.”

Dr. Moulin added that the study nevertheless adds “one more stone” to the construct of the idea of high risk around high Lp(a) levels, and “prepares the ground” for the availability of two drugs to decrease Lp(a) levels, expected in 2026 and 2027.

These are expected to substantially reduce Lp(a) levels and achieve a reduction in cardiovascular risk of around 20%-40%, “which would be interesting,” especially as “we have patients who have Lp(a) levels four times above the upper normal value.”

Crucially, they may already have normal LDL cholesterol levels, meaning that, for some patients, “there is clearly a need for such treatment, as long as it is proven that it will decrease cardiovascular risk.”

For the moment, however, the strategy for managing patients with high Lp(a) remains to increase the dose of statin and to have more stringent targets, although Dr. Moulin pointed out that, “when you give statins, you raise slightly Lp(a) levels.”

Dr. Nurmohamed said in an interview that “we know from largely genetic and observational studies that Lp(a) is causally associated with atherosclerotic cardiovascular disease.”

What is less clear is the exact underlying mechanism, he said, noting that there have been several imaging studies in high and low Lp(a) patients that have yielded conflicting results in terms of the relationship with plaque burden.

To investigate the impact of Lp(a) levels on long-term coronary plaque progression, the team invited patients who had taken part in a previous CCTA study to undergo repeat CCTA, regardless of their underlying symptoms.

In all, 299 patients underwent follow-up imaging a median of 10.2 years after their original scan. Plaque volumes were quantified and adjusted for vessel volumes, and the patients were classified as having high (≥ 70 nmol/L) or low (< 70 nmol/L) Lp(a) levels.

After excluding patients who had undergone coronary artery bypass grafting, the team analyzed 274 patients with a mean age at baseline of 57 years. Of these, 159 (58%) were men. High Lp(a) levels were identified in 87 (32%) patients.

The team found that at baseline, patients with high Lp(a) levels had significantly larger percent atheroma volumes than those with low levels, at 3.92% versus 2.17%, or an absolute difference of 1.75% (P = .013).

The difference between the two groups was even greater at the follow-up, when percent atheroma volumes reached 8.75% in patients with high Lp(a) levels versus 3.90% for those with low levels, or an absolute difference of 4.85% (P = .005).

Similar findings were seen when looking separately at percentage of noncalcified and calcified plaque volumes as well as when analyzing for the presence of low-density plaques.

Multivariate analysis taking into account clinical risk factors, statin use, and CT tube voltage found that high Lp(a) levels were associated with a greater percent atheroma volume at baseline, at an odds ratio versus low Lp(a) of 1.83 (95% confidence interval, 0.12-3.54; P = .037).

High Lp(a) levels were also linked to a larger percent atheroma volume on follow-up imaging, at an odds ratio of 3.25 (95% CI, 0.80-5.71; P = .010), and a significantly greater change in atheroma volume from baseline to follow-up imaging, at an odds ratio of 1.86 (95% CI, 0.59-3.14; P = .005).

Finally, the team showed that, after adjusting for clinical risk factors, high baseline Lp(a) levels were associated with an increased risk of MACE during the follow-up period, at a hazard ratio versus low Lp(a) levels of 2.10 (95% CI, 1.01-4.29, P = .048).

No funding was declared. Dr. Nurmohamed is cofounder of Lipid Tools. Other authors declare relationships with Amgen, Novartis, Esperion, Sanofi-Regeneron, Ackee, Cleerly, GW Heart and Vascular Institute, Siemens Healthineers, and HeartFlow.

 

 

A version of this article first appeared on Medscape.com.


AT EAS 2023

Is ChatGPT a friend or foe of medical publishing?


 

Researchers may use artificial intelligence (AI) language models such as ChatGPT to write and revise scientific manuscripts, according to a new announcement from the International Committee of Medical Journal Editors. These tools should not be listed as authors, and researchers must denote how AI-assisted technologies were used, the committee said.

The guidelines are the latest effort by medical journals to define policies for using these large language models (LLMs) in scientific publication. While AI-assisted tools can help with tasks such as writing, analyzing data, and catching mistakes, they are also prone to errors, noted Casey Greene, PhD, a professor of biomedical informatics at the University of Colorado at Denver, Aurora. It is also not entirely clear how these tools store and process information, or who has access to that information, he noted.

At the same time, experts argue that these AI tools could have a positive impact on the field by limiting some of the linguistic disparities in scientific publishing as well as alleviating the burden of some monotonous or mechanical tasks that come along with manuscript writing.

What experts can agree on, though, is that the use of AI tools is here to stay. “This is going to become a common tool,” Dr. Greene said. “I don’t think there’s a way out of that at this point.”

A change in medical publishing

OpenAI released ChatGPT in November 2022. In its own words, ChatGPT is “a deep learning model that has been trained on a massive amount of text data to understand and generate humanlike text.” Enter a question or a prompt, and it will respond. For example, when asked how the AI tool can be used in scientific publishing, ChatGPT responded:

“ChatGPT can aid scientific publishing by generating ideas, clarifying concepts, conducting preliminary literature reviews, providing proofreading and editing suggestions, and assisting with outlining and organizing scientific papers. However, it should be used as a supportive tool alongside domain expertise and validation from trusted scientific sources.”

Just a few months after ChatGPT became available, researchers began using this tool in their own work. One individual, Som Biswas, MD, a radiologist at the University of Tennessee Health Science Center in Memphis, reportedly used ChatGPT to author 16 scientific articles in just 4 months, according to the Daily Beast. Five of these articles have been published in four different journals. Dr. Biswas declined to be interviewed for this article.

There were also reports of papers with ChatGPT as one of the listed authors, which sparked backlash. In response, JAMA, Nature, and Science all published editorials in January outlining their policies for using ChatGPT and other large language models in the scientific authoring process. Editors from the journals of the American College of Cardiology and the American College of Rheumatology also updated their policies to reflect the influence of AI authoring tools.

The consensus is that AI has no place on the author byline.

“We think that’s not appropriate, because coauthorship means that you are taking responsibility for the analysis and the generation of data that are included in a manuscript. A machine that is dictated by AI can’t take responsibility,” said Daniel Solomon, MD, MPH, a rheumatologist at Brigham and Women’s Hospital, Boston, and the editor in chief of the ACR journal Arthritis & Rheumatology.

Issues with AI

One of the big concerns around using AI in writing is that it can generate text that seems plausible but is untrue or not supported by data. For example, Dr. Greene and colleague Milton Pividori, PhD, also of the University of Colorado, were writing a journal article about new software they developed that uses a large language model to revise scientific manuscripts.

“We used the same software to revise that article and at one point, it added a line that noted that the large language model had been fine-tuned on a data set of manuscripts from within the same field. This makes a lot of sense, and is absolutely something you could do, but was not something that we did,” Dr. Greene said. “Without a really careful review of the content, it becomes possible to invent things that were not actually done.”

In another case, ChatGPT falsely stated that a prominent law professor had been accused of sexual assault, citing a Washington Post article that did not exist.

“We live in a society where we are extremely concerned about fake news,” Dr. Pividori added, “and [these kinds of errors] could certainly exacerbate that in the scientific community, which is very concerning because science informs public policy.”

Another issue is the lack of transparency around how large language models like ChatGPT process and store data used to make queries.

“We have no idea how they are recording all the prompts and things that we input into ChatGPT and their systems,” Dr. Pividori said.

OpenAI recently addressed some privacy concerns by allowing users to turn off their chat history with the AI chatbot, so conversations cannot be used to train or improve the company’s models. But Dr. Greene noted that the terms of service “still remain pretty nebulous.”

Dr. Solomon is also concerned about researchers using these AI tools in authoring without knowing how they work. “The thing we are really concerned about is the fact that [LLMs] are a bit of a black box – people don’t really understand the methodologies,” he said.

A positive tool?

But despite these concerns, many think that these types of AI-assisted tools could have a positive impact on medical publishing, particularly for researchers for whom English is not their first language, noted Catherine Gao, MD, a pulmonary and critical care instructor at Northwestern University, Chicago. She recently led research comparing scientific abstracts written by ChatGPT and real abstracts and discovered that reviewers found it “surprisingly difficult” to differentiate the two.

“The majority of research is published in English,” she said in an email. “Responsible use of LLMs can potentially reduce the burden of writing for busy scientists and improve equity for those who are not native English speakers.”

Dr. Pividori agreed, adding that as a non-native English speaker, he spends much more time working on the structure and grammar of sentences when authoring a manuscript, compared with people who speak English as a first language. He noted that these tools can also be used to automate some of the more monotonous tasks that come along with writing manuscripts and allow researchers to focus on the more creative aspects.

In the future, “I want to focus more on the things that only a human can do and let these tools do all the rest of it,” he said.

New rules

But despite how individual researchers feel about LLMs, they agree that these AI tools are here to stay.

“I think that we should anticipate that they will become part of the medical research establishment over time, when we figure out how to use them appropriately,” Dr. Solomon said.

While the debate over how best to use AI in medical publications will continue, journal editors agree that all authors of a manuscript are solely responsible for the content of articles that used AI-assisted technology.

“Authors should carefully review and edit the result because AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased,” the ICMJE guidelines state. “Authors should be able to assert that there is no plagiarism in their paper, including in text and images produced by the AI.” This includes appropriate attribution of all cited materials.

The committee also recommends that authors describe, in both the cover letter and the submitted work, how AI was used in the manuscript writing process. Recently updated guidelines from the World Association of Medical Editors recommend that all prompts used to generate new text or analytical work be provided in submitted work. Dr. Greene also noted that authors who used an AI tool to revise their work can include a version of the manuscript untouched by LLMs.

It is similar to a preprint, he said, but rather than publishing a version of a paper prior to peer review, someone is showing a version of a manuscript before it was reviewed and revised by AI. “This type of practice could be a path that lets us benefit from these models,” he said, “without having the drawbacks that many are concerned about.”

Dr. Solomon has financial relationships with AbbVie, Amgen, Janssen, CorEvitas, and Moderna. Both Dr. Greene and Dr. Pividori are inventors in the U.S. Provisional Patent Application No. 63/486,706 that the University of Colorado has filed for the “Publishing Infrastructure For AI-Assisted Academic Authoring” invention with the U.S. Patent and Trademark Office. Dr. Greene and Dr. Pividori also received a grant from the Alfred P. Sloan Foundation to improve their AI-based manuscript revision tool. Dr. Gao reported no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


Nitroglycerin patches do not improve menopause symptoms

Fri, 06/09/2023 - 09:50

Vasomotor symptoms affect as many as 75% of menopausal women in the United States. Hot flashes, characterized by a sudden onset of flushing, sweating, and chills, can be managed with hormone therapy, but prolonged use of that treatment poses health risks. In a study published in JAMA Internal Medicine, researchers found that nitroglycerin patches did not produce lasting improvements in the frequency and severity of hot flashes, although there was a short-term benefit.

METHODOLOGY

  • The study was a randomized, double-blind trial involving 134 California women aged 40-62 years.
  • Between July 2018 and December 2021, participants self-administered either a nitroglycerin patch at a dosage of 0.2 to 0.6 mg/h or a placebo patch every night.
  • Participants were in the late stages of menopause or had already undergone menopause. They reported having seven or more hot flashes per day; at least four were moderate to severe over a 1-week period.
  • The primary outcome was a change in the frequency of hot flashes over 5 and 12 weeks.

TAKEAWAY

  • Over 5 weeks, the frequency of moderate to severe hot flashes decreased by 3.3 episodes per day in the nitroglycerin group, compared with 2.2 episodes per day in the placebo group (95% CI, −2.2 to 0; P = .05).
  • The reduction in overall frequency of hot flashes – either mild, moderate, or severe – over the 5-week period was not statistically significant.
  • Over the 12-week period, no statistically significant reductions in hot flashes occurred.
  • More than two thirds of participants assigned to the nitroglycerin patches reported having headaches, while three reported chest pain and one had a syncopal episode.

IN PRACTICE

The findings do not support daily use of nitroglycerin patches to treat vasomotor symptoms, the researchers conclude.

“The bottom line is that our study doesn’t allow us to recommend nitroglycerin skin patches as a strategy for consumers to suppress hot flashes in the long term,” Alison Huang, MD, MAS, lead author of the study, said in a press release. “The menopause field is still lacking in effective treatment approaches that don’t involve hormones.”

STUDY DETAILS

The study was led by Alison Huang, MD, MAS, a professor of medicine at the University of California, San Francisco. Two of the authors reported grants from the National Institute on Aging.

LIMITATIONS

Almost 20% of women who used the nitroglycerin patches discontinued treatment before the end of the trial because they could not tolerate the medication, experienced an adverse event, or their symptoms did not improve, according to the researchers. In addition, the 1-week period used to screen for severity and frequency of hot flashes may have been too short to confirm that symptoms were prolonged, which could explain the better-than-expected results in the placebo group.

DISCLOSURES

One author served on the medical advisory board of SomaLogic. Another author is an unpaid consultant to Astellas Pharma. Another author reported grants from the National Institutes of Health.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Vasomotor symptoms affect as many as 75% of menopausal women in the United States. Characterized by a sudden onset of flushing, sweating, and chills, symptoms of hot flashes can be managed with hormone therapy, but prolonged use of the treatment poses health risks. In a study published in JAMA Internal Medicine, researchers found that the use of nitroglycerin patches did not result in lasting improvements in the frequency and severity of hot flashes, but there was a short-term benefit.

METHODOLOGY

  • The  was a randomized, double-blinded trial involving 134 California women aged 40-62 years.
  • Between July 2018 and December 2021, participants self-administered either a nitroglycerin patch at a dosage of 0.2 to 0.6 mg/h or a placebo patch every night.
  • Participants were in the late stages of menopause or had already undergone menopause. They reported having seven or more hot flashes per day; at least four were moderate to severe over a 1-week period.
  • The primary outcome was a change in the frequency of hot flashes over 5 and 12 weeks.

TAKEAWAY

  • Over 5 weeks, the frequency of moderate to severe hot flashes decreased by 3.3 episodes per day in the nitroglycerin group, compared with 2.2 episodes per day in the placebo group (95% CI for the between-group difference, −2.2 to 0; P = .05).
  • The reduction in overall frequency of hot flashes – either mild, moderate, or severe – over the 5-week period was not statistically significant.
  • Over the 12-week period, no statistically significant reductions in hot flashes occurred.
  • More than two-thirds of participants assigned to the nitroglycerin patches reported headaches; three reported chest pain, and one had a syncopal episode.
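For readers who want to check the arithmetic, the between-group contrast implied by the reported per-group changes can be reproduced directly. This is a back-of-the-envelope sketch, not a reanalysis of the trial data:

```python
# Reported 5-week changes in moderate-to-severe hot flash frequency
# (episodes per day); negative values are decreases.
nitroglycerin_change = -3.3  # nitroglycerin group
placebo_change = -2.2        # placebo group

# Between-group difference: -1.1 episodes/day, which sits inside the
# reported 95% CI of -2.2 to 0 (the interval just touches zero, matching
# the borderline P = .05).
difference = round(nitroglycerin_change - placebo_change, 1)
print(difference)
```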

IN PRACTICE

The findings do not support daily use of nitroglycerin patches to treat vasomotor symptoms, the researchers conclude.

“The bottom line is that our study doesn’t allow us to recommend nitroglycerin skin patches as a strategy for consumers to suppress hot flashes in the long term,” Alison Huang, MD, MAS, lead author of the study, said in a press release. “The menopause field is still lacking in effective treatment approaches that don’t involve hormones.”

STUDY DETAILS

The study was led by Alison Huang, MD, MAS, a professor of medicine at the University of California, San Francisco. Two of the authors reported grants from the National Institute on Aging.

LIMITATIONS

Almost 20% of women who used the nitroglycerin patches discontinued treatment before the end of the trial because they could not tolerate the medication, experienced an adverse event, or their symptoms did not improve, according to the researchers. In addition, the 1-week period used to screen for severity and frequency of hot flashes may have been too short to confirm that symptoms were prolonged, which could explain the better-than-expected results in the placebo group.

DISCLOSURES

One author served on the medical advisory board of SomaLogic. Another author is an unpaid consultant to Astellas Pharma. Another author reported grants from the National Institutes of Health.

A version of this article first appeared on Medscape.com.



When could you be sued for AI malpractice? You’re likely using it now

Updated Mon, 06/12/2023 - 10:45

The ways in which artificial intelligence (AI) may transform the future of medicine are making headlines across the globe. But chances are, you’re already using AI in your practice every day – you may just not realize it.

And whether you recognize the presence of AI or not, the technology could be putting you in danger of a lawsuit, legal experts say.

The use of AI in daily practice can carry hidden liabilities, and as hospitals and medical groups deploy it into more areas of health care, new exposures may be on the horizon.

“For physicians, AI has also not yet drastically changed or improved the way care is provided or consumed,” said Michael LeTang, chief nursing informatics officer and vice president of risk management and patient safety at Healthcare Risk Advisors, part of TDC Group. “Consequently, it may seem like AI is not present in their work streams, but in reality, it has been utilized in health care for several years. As AI technologies continue to develop and become more sophisticated, we can expect them to play an increasingly significant role in health care.”

Today, most AI applications in health care use narrow AI, which is designed to complete a single task without human assistance, as opposed to artificial general intelligence (AGI), which pertains to human-level reasoning and problem solving across a broad spectrum. Here are some ways doctors are using AI throughout the day – sometimes being aware of its assistance, and sometimes being unaware:

  • Many doctors use electronic health records (EHRs) with integrated AI that include computerized clinical decision support tools designed to reduce the risk of diagnostic error and to integrate decision-making in the medication ordering function.
  • Cardiologists, pathologists, and dermatologists use AI in the interpretation of vast amounts of images, tracings, and complex patterns.
  • Surgeons are using AI-enhanced surgical robotics for orthopedic surgeries, such as joint replacement and spine surgery.
  • A growing number of doctors are using ChatGPT to assist in drafting prior authorization letters for insurers. Experts say more doctors are also experimenting with ChatGPT to support medical decision-making.
  • Within oncology, physicians use machine learning techniques in the form of computer-aided detection systems for early breast cancer detection.
  • AI algorithms are often used by health systems for workflow, staffing optimization, population management, and care coordination.
  • Some systems within EHRs use AI to indicate high-risk patients.
  • Physicians are using AI applications for the early recognition of sepsis, including EHR-integrated decision tools, such as the Hospital Corporation of America Healthcare’s Sepsis Prediction and Optimization Therapy and the Sepsis Early Risk Assessment algorithm.
  • About 30% of radiologists use AI in their practice to analyze x-rays and CT scans.
  • Epic Systems recently announced a partnership with Microsoft to integrate ChatGPT into MyChart, Epic’s patient portal system. Pilot hospitals will utilize ChatGPT to automatically generate responses to patient-generated questions sent via the portal.

The growth of AI in health care has been enormous, and it’s only going to continue, said Ravi B. Parikh, MD, an assistant professor in the department of medical ethics and health policy and medicine at the University of Pennsylvania, Philadelphia.

“What’s really critical is that physicians, clinicians, and nurses using AI are provided with the tools to understand how artificial intelligence works and, most importantly, understand that they are still accountable for making the ultimate decision,” Mr. LeTang said. “The information is not always going to be the right thing to do or the most accurate thing to do. They’re still liable for making a bad decision, even if AI is driving that.”
 

What are the top AI legal dangers of today?

A pressing legal risk is becoming too reliant on the suggestions that AI-based systems provide, which can lead to poor care decisions, said Kenneth Rashbaum, a New York–based cybersecurity attorney with more than 25 years of experience in medical malpractice defense.

This can occur, for example, when using clinical support systems that leverage AI, machine learning, or statistical pattern recognition. Today, clinical support systems are commonly administered through EHRs and other computerized clinical workflows. In general, such systems match a patient’s characteristics to a computerized clinical knowledge base. An assessment or recommendation is then presented to the physician for a decision.
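The matching step described above can be pictured with a deliberately simplified, rule-based sketch. The rules, field names, and thresholds below are invented for illustration and are not drawn from any real product; the point is that the system only surfaces a recommendation, which the clinician then accepts or rejects:

```python
# Toy, hypothetical clinical decision support (CDS) matcher: compares a
# patient record against a tiny rule-based knowledge base and returns
# recommendations for the clinician to review. All rules here are
# illustrative, not a real product's logic.

def cds_recommendations(patient: dict) -> list:
    """Match patient characteristics against a toy knowledge base."""
    recommendations = []
    # Rule 1: flag a beta-lactam order against a documented penicillin allergy.
    if patient.get("penicillin_allergy") and "amoxicillin" in patient.get("pending_orders", []):
        recommendations.append("ALERT: pending amoxicillin order conflicts with documented penicillin allergy")
    # Rule 2: flag metformin in severe renal impairment.
    if patient.get("egfr", 100) < 30 and "metformin" in patient.get("pending_orders", []):
        recommendations.append("ALERT: metformin generally avoided when eGFR < 30")
    return recommendations

# The system only *presents* these alerts; the clinician remains
# responsible for deciding whether each one applies to this patient.
patient = {"penicillin_allergy": True, "pending_orders": ["amoxicillin"], "egfr": 55}
for rec in cds_recommendations(patient):
    print(rec)
```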

“If the clinician blindly accepts it without considering whether it’s appropriate for this patient at this time with this presentation, the clinician may bear some responsibility if there is an untoward result,” Mr. Rashbaum said.

“A common claim even in the days before the EMR [electronic medical record] and AI, was that the clinician did not take all available information into account in rendering treatment, including history of past and present condition, as reflected in the records, communication with past and other present treating clinicians, lab and radiology results, discussions with the patient, and physical examination findings,” he said. “So, if the clinician relied upon the support prompt to the exclusion of these other sources of information, that could be a very strong argument for the plaintiff.”

Chatbots, such as OpenAI’s ChatGPT, are another form of AI raising legal red flags. ChatGPT, trained on a massive set of text data, can carry out conversations, write code, draft emails, and answer any question posed. The chatbot has gained considerable credibility for accurately diagnosing rare conditions in seconds, and it recently passed the U.S. Medical Licensing Examination.

It’s unclear how many doctors are signing onto the ChatGPT website daily, but physicians are actively using the chatbot, particularly for assistance with prior authorization letters and to support decision-making processes in their practices, said Mr. LeTang.

When physicians ask ChatGPT a question, however, they should be mindful that ChatGPT could “hallucinate,” a term that refers to a generated response that sounds plausible but is factually incorrect or is unrelated to the context, explains Harvey Castro, MD, an emergency physician, ChatGPT health care expert, and author of the 2023 book “ChatGPT and Healthcare: Unlocking the Potential of Patient Empowerment.”

Acting on ChatGPT’s response without vetting the information places doctors at serious risk of a lawsuit, he said.

“Sometimes, the response is half true and half false,” he said. “Say, I go outside my specialty of emergency medicine and ask it about a pediatric surgical procedure. It could give me a response that sounds medically correct, but then I ask a pediatric cardiologist, and he says, ‘We don’t even do this. This doesn’t even exist!’ Physicians really have to make sure they are vetting the information provided.”

In response to ChatGPT’s growing usage by health care professionals, hospitals and practices are quickly implementing guidelines, policies, and restrictions that caution physicians about the accuracy of ChatGPT-generated information, adds Mr. LeTang.

Emerging best practices include avoiding the input of patient health information, personally identifiable information, or any data that could be commercially valuable or considered the intellectual property of a hospital or health system, he said.

“Another crucial guideline is not to rely solely on ChatGPT as a definitive source for clinical decision-making; physicians must exercise their professional judgment,” he said. “If best practices are not adhered to, the associated risks are present today. However, these risks may become more significant as AI technologies continue to evolve and become increasingly integrated into health care.”

The potential for misdiagnosis by AI systems and the risk of unnecessary procedures if physicians do not thoroughly evaluate and validate AI predictions are other dangers.

As an example, Mr. LeTang described a case in which a physician documents in the EHR that a patient has presented to the emergency department with chest pains and other signs of a heart attack, and an AI algorithm predicts that the patient is experiencing an active myocardial infarction. If the physician then sends the patient for stenting or an angioplasty without other concrete evidence or tests to confirm the diagnosis, the doctor could later face a misdiagnosis complaint if the costly procedures were unnecessary.

“That’s one of the risks of using artificial intelligence,” he said. “A large percentage of malpractice claims is failure to diagnose, delayed diagnosis, or inaccurate diagnosis. What falls in the category of failure to diagnose is sending a patient for an unnecessary procedure or having an adverse event or bad outcome because of the failure to diagnose.”

So far, no AI lawsuits have been filed, but they may make an appearance soon, said Sue Boisvert, senior patient safety risk manager at The Doctors Company, a national medical liability insurer.

“There are hundreds of AI programs currently in use in health care,” she said. “At some point, a provider will make a decision that is contrary to what the AI recommended. The AI may be wrong, or the provider may be wrong. Either way, the provider will neglect to document their clinical reasoning, a patient will be harmed, and we will have the first AI claim.”

Upcoming AI legal risks to watch for

Lawsuits that allege biased patient care by physicians on the basis of algorithmic bias may also be forthcoming, analysts warn.

Much has been written about algorithmic bias that compounds and worsens inequities in socioeconomic status, ethnicity, sexual orientation, and gender in health systems. In 2019, a groundbreaking article in Science shed light on commonly used algorithms that are considered racially biased and how health care professionals often use such information to make medical decisions.

No claims involving AI bias have come down the pipeline yet, but it’s an area to watch, said Ms. Boisvert. She noted a website that highlights complaints and accusations of AI bias, including in health care.

“We need to be sure the training of the AI is appropriate, current, and broad enough so that there is no bias in the AI when it’s participating in the decision-making,” said Ms. Boisvert. “Imagine if the AI is diagnosing based on a dataset that is not local. It doesn’t represent the population at that particular hospital, and it’s providing inaccurate information to the physicians who are then making decisions about treatment.”

In pain management, for example, there are known differences in how patients experience pain, Ms. Boisvert said. If AI was being used to develop an algorithm for how a particular patient’s postoperative pain should be managed, and the algorithm did not include the differences, the pain control for a certain patient could be inappropriate. A poor outcome resulting from the treatment could lead to a claim against the physician or hospital that used the biased AI system, she said.

In the future, as AI becomes more integrated and accepted in medicine, there may be a risk of legal complaints against doctors for not using AI, said Saurabh Jha, MD, an associate professor of radiology at the University of Pennsylvania, Philadelphia, and a scholar of AI in radiology.

“Ultimately, we might get to a place where AI starts helping physicians detect more or reduce the miss of certain conditions, and it becomes the standard of care,” Dr. Jha said. “For example, if it became part of the standard of care for pulmonary embolism [PE] detection, and you didn’t use it for PE detection, and there was a miss. That could put you at legal risk. We’re not at that stage yet, but that is one future possibility.”

Dr. Parikh envisions an even cloudier liability landscape as the potential grows for AI to control patient care decisions. In such a scenario, rather than just issuing an alert or prediction to a physician, the AI system could trigger an action.

For instance, an algorithm trained to predict sepsis could, once triggered, initiate a nurse-led rapid response or a change in patient care outside the clinician’s control, said Dr. Parikh, who coauthored a recent article on AI and medical liability in The Milbank Quarterly.

“That’s still very much the minority of how AI is being used, but as evidence is growing that AI-based diagnostic tools perform equivalent or even superior to physicians, these autonomous workflows are being considered,” Dr. Parikh said. “When the ultimate action upon the patient is more determined by the AI than what the clinician does, then I think the liability picture gets murkier, and we should be thinking about how we can respond to that from a liability framework.”

How you can prevent AI-related lawsuits

The first step to preventing an AI-related claim is being aware of when and how you are using AI.

Ensure you’re informed about how the AI was trained, Ms. Boisvert stresses.

“Ask questions!” she said. “Is the AI safe? Are the recommendations accurate? Does the AI perform better than current systems? In what way? What databases were used, and did the programmers consider bias? Do I understand how to use the results?”

Never blindly trust the AI but rather view it as a data point in a medical decision, said Dr. Parikh. Ensure that other sources of medical information are properly accessed and that best practices for your specialty are still being followed.

When using any form of AI, document your usage, adds Mr. Rashbaum. A record that clearly outlines how the physician incorporated the AI is critical if a claim later arises in which the doctor is accused of AI-related malpractice, he said.

“Indicating how the AI tool was used, why it was used, and that it was used in conjunction with available clinical information and the clinician’s best judgment could reduce the risk of being found responsible as a result of AI use in a particular case,” he said.
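The kind of record Mr. Rashbaum describes can be sketched as a small structured note capturing how and why an AI tool was used alongside the clinician's own judgment. The field names and example values below are hypothetical, invented purely to illustrate the idea, not a real EHR schema:

```python
# Hypothetical sketch of a structured AI-usage note: which tool was used,
# why, what it output, what else the clinician reviewed, and the
# clinician's own reasoning. All field names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUsageNote:
    tool_name: str                 # which AI tool was consulted
    purpose: str                   # why it was used in this encounter
    ai_output_summary: str         # what the tool suggested
    other_sources_reviewed: list   # corroborating clinical information
    clinician_rationale: str       # the clinician's independent judgment
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

note = AIUsageNote(
    tool_name="EHR-integrated sepsis alert (example)",
    purpose="Early-warning flag reviewed during ED evaluation",
    ai_output_summary="High sepsis risk score",
    other_sources_reviewed=["vitals", "lactate", "exam findings", "history"],
    clinician_rationale="Alert weighed against labs and exam; treatment decided per clinical judgment",
)
```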

Use chatbots, such as ChatGPT, the way they were intended, as support tools, rather than definitive diagnostic instruments, adds Dr. Castro.

“Doctors should also be well-trained in interpreting and understanding the suggestions provided by ChatGPT and should use their clinical judgment and experience alongside the AI tool for more accurate decision-making,” he said.

In addition, because no AI insurance product exists on the market, physicians and organizations using AI – particularly for direct health care – should evaluate their current insurance or insurance-like products to determine where a claim involving AI might fall and whether the policy would respond, said Ms. Boisvert. The AI vendor/manufacturer will likely have indemnified themselves in the purchase and sale agreement or contract, she said.

It will also become increasingly important for medical practices, hospitals, and health systems to put in place strong data governance strategies, Mr. LeTang said.

“AI relies on good data,” he said. “A data governance strategy is a key component to making sure we understand where the data is coming from, what it represents, how accurate it is, if it’s reproducible, what controls are in place to ensure the right people have the right access, and that if we’re starting to use it to build algorithms, that it’s deidentified.”

While no malpractice claims associated with the use of AI have yet surfaced, this may change as legal courts catch up on the backlog of malpractice claims that were delayed because of COVID-19, and even more so as AI becomes more prevalent in health care, Mr. LeTang said.

“Similar to the attention that autonomous driving systems, like Tesla, receive when the system fails and accidents occur, we can be assured that media outlets will widely publicize AI-related medical adverse events,” he said. “It is crucial for health care professionals, AI developers, and regulatory authorities to work together to ensure the responsible use of AI in health care, with patient safety as the top priority. By doing so, they can mitigate the risks associated with AI implementation and minimize the potential for legal disputes arising from AI-related medical errors.”

A version of this article first appeared on Medscape.com.



Dr. Parikh envisions an even cloudier liability landscape as the potential grows for AI to control patient care decisions. In such a scenario, rather than just issuing an alert or prediction to a physician, the AI system could trigger an action.

For instance, if an algorithm is trained to predict sepsis and, once triggered, the AI could initiate a nurse-led rapid response or a change in patient care outside the clinician’s control, said Dr. Parikh, who coauthored a recent article on AI and medical liability in The Milbank Quarterly.

“That’s still very much the minority of how AI is being used, but as evidence is growing that AI-based diagnostic tools perform equivalent or even superior to physicians, these autonomous workflows are being considered,” Dr. Parikh said. “When the ultimate action upon the patient is more determined by the AI than what the clinician does, then I think the liability picture gets murkier, and we should be thinking about how we can respond to that from a liability framework.”
 

 

 

How you can prevent AI-related lawsuits

The first step to preventing an AI-related claim is being aware of when and how you are using AI.

Ensure you’re informed about how the AI was trained, Ms. Boisvert stresses.

“Ask questions!” she said. “Is the AI safe? Are the recommendations accurate? Does the AI perform better than current systems? In what way? What databases were used, and did the programmers consider bias? Do I understand how to use the results?”

Never blindly trust the AI but rather view it as a data point in a medical decision, said Dr. Parikh. Ensure that other sources of medical information are properly accessed and that best practices for your specialty are still being followed.

When using any form of AI, document your usage, adds Mr. Rashbaum. A record that clearly outlines how the physician incorporated the AI is critical if a claim later arises in which the doctor is accused of AI-related malpractice, he said.

“Indicating how the AI tool was used, why it was used, and that it was used in conjunction with available clinical information and the clinician’s best judgment could reduce the risk of being found responsible as a result of AI use in a particular case,” he said.

Use chatbots, such as ChatGPT, the way they were intended, as support tools, rather than definitive diagnostic instruments, adds Dr. Castro.

“Doctors should also be well-trained in interpreting and understanding the suggestions provided by ChatGPT and should use their clinical judgment and experience alongside the AI tool for more accurate decision-making,” he said.

In addition, because no AI insurance product exists on the market, physicians and organizations using AI – particularly for direct health care – should evaluate their current insurance or insurance-like products to determine where a claim involving AI might fall and whether the policy would respond, said Ms. Boisvert. The AI vendor/manufacturer will likely have indemnified themselves in the purchase and sale agreement or contract, she said.

It will also become increasingly important for medical practices, hospitals, and health systems to put in place strong data governance strategies, Mr. LeTang said.

“AI relies on good data,” he said. “A data governance strategy is a key component to making sure we understand where the data is coming from, what is represents, how accurate it is, if it’s reproducible, what controls are in place to ensure the right people have the right access, and that if we’re starting to use it to build algorithms, that it’s deidentified.”

While no malpractice claims associated with the use of AI have yet surfaced, this may change as legal courts catch up on the backlog of malpractice claims that were delayed because of COVID-19, and even more so as AI becomes more prevalent in health care, Mr. LeTang said.

“Similar to the attention that autonomous driving systems, like Tesla, receive when the system fails and accidents occur, we can be assured that media outlets will widely publicize AI-related medical adverse events,” he said. “It is crucial for health care professionals, AI developers, and regulatory authorities to work together to ensure the responsible use of AI in health care, with patient safety as the top priority. By doing so, they can mitigate the risks associated with AI implementation and minimize the potential for legal disputes arising from AI-related medical errors.”

A version of this article first appeared on Medscape.com.

The ways in which artificial intelligence (AI) may transform the future of medicine are making headlines across the globe. But chances are, you’re already using AI in your practice every day – you may just not realize it.

And whether you recognize the presence of AI or not, the technology could be putting you in danger of a lawsuit, legal experts say.

The use of AI in daily practice can come with hidden liabilities, and as hospitals and medical groups deploy AI into more areas of health care, new liability exposures may be on the horizon.

“For physicians, AI has also not yet drastically changed or improved the way care is provided or consumed,” said Michael LeTang, chief nursing informatics officer and vice president of risk management and patient safety at Healthcare Risk Advisors, part of TDC Group. “Consequently, it may seem like AI is not present in their work streams, but in reality, it has been utilized in health care for several years. As AI technologies continue to develop and become more sophisticated, we can expect them to play an increasingly significant role in health care.”

Today, most AI applications in health care use narrow AI, which is designed to complete a single task without human assistance, as opposed to artificial general intelligence (AGI), which pertains to human-level reasoning and problem solving across a broad spectrum. Here are some ways doctors are using AI throughout the day – sometimes being aware of its assistance, and sometimes being unaware:

  • Many doctors use electronic health records (EHRs) with integrated AI that include computerized clinical decision support tools designed to reduce the risk of diagnostic error and to integrate decision-making in the medication ordering function.
  • Cardiologists, pathologists, and dermatologists use AI in the interpretation of vast amounts of images, tracings, and complex patterns.
  • Surgeons are using AI-enhanced surgical robotics for orthopedic surgeries, such as joint replacement and spine surgery.
  • A growing number of doctors are using ChatGPT to assist in drafting prior authorization letters for insurers. Experts say more doctors are also experimenting with ChatGPT to support medical decision-making.
  • Within oncology, physicians use machine learning techniques in the form of computer-aided detection systems for early breast cancer detection.
  • AI algorithms are often used by health systems for workflow, staffing optimization, population management, and care coordination.
  • Some systems within EHRs use AI to indicate high-risk patients.
  • Physicians are using AI applications for the early recognition of sepsis, including EHR-integrated decision tools, such as Hospital Corporation of America Healthcare’s Sepsis Prediction and Optimization of Therapy (SPOT) and the Sepsis Early Risk Assessment (SERA) algorithm.
  • About 30% of radiologists use AI in their practice to analyze x-rays and CT scans.
  • Epic Systems recently announced a partnership with Microsoft to integrate ChatGPT into MyChart, Epic’s patient portal system. Pilot hospitals will utilize ChatGPT to automatically generate responses to patient-generated questions sent via the portal.

The growth of AI in health care has been enormous, and it’s only going to continue, said Ravi B. Parikh, MD, an assistant professor in the department of medical ethics and health policy and medicine at the University of Pennsylvania, Philadelphia.

“What’s really critical is that physicians, clinicians, and nurses using AI are provided with the tools to understand how artificial intelligence works and, most importantly, understand that they are still accountable for making the ultimate decision,” Mr. LeTang said. “The information is not always going to be the right thing to do or the most accurate thing to do. They’re still liable for making a bad decision, even if AI is driving that.”

What are the top AI legal dangers of today?

A pressing legal risk is becoming too reliant on the suggestions that AI-based systems provide, which can lead to poor care decisions, said Kenneth Rashbaum, a New York–based cybersecurity attorney with more than 25 years of experience in medical malpractice defense.

This can occur, for example, when using clinical support systems that leverage AI, machine learning, or statistical pattern recognition. Today, clinical support systems are commonly administered through EHRs and other computerized clinical workflows. In general, such systems match a patient’s characteristics to a computerized clinical knowledge base. An assessment or recommendation is then presented to the physician for a decision.
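In rough terms, that matching step can be sketched in a few lines of Python. This is a deliberately minimal, hypothetical illustration; the rules, field names, and thresholds below are invented for this sketch, not drawn from any real clinical knowledge base:

```python
# Minimal sketch of the matching step in a rule-based clinical decision
# support (CDS) system: patient characteristics are checked against a
# knowledge base of rules, and matching recommendations are surfaced.
# All rules, fields, and thresholds here are hypothetical.

def match_rules(patient: dict, knowledge_base: list[dict]) -> list[str]:
    """Return advisory messages for every rule the patient record satisfies.

    The output is advisory only; the clinician remains responsible for
    the final decision.
    """
    return [rule["message"] for rule in knowledge_base
            if rule["applies"](patient)]

# Hypothetical knowledge base: one drug-interaction rule, one lab rule.
KNOWLEDGE_BASE = [
    {
        "applies": lambda p: {"warfarin", "ibuprofen"} <= set(p["medications"]),
        "message": "Possible interaction: warfarin + NSAID (bleeding risk).",
    },
    {
        "applies": lambda p: p["potassium_mmol_l"] > 5.5,
        "message": "Hyperkalemia flag: review potassium-sparing medications.",
    },
]

patient = {"medications": ["warfarin", "ibuprofen"], "potassium_mmol_l": 4.1}
print(match_rules(patient, KNOWLEDGE_BASE))  # only the interaction rule fires
```

The point of the sketch is the last step: the system only surfaces advisory messages, and the clinician decides what, if anything, to do with them.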

“If the clinician blindly accepts it without considering whether it’s appropriate for this patient at this time with this presentation, the clinician may bear some responsibility if there is an untoward result,” Mr. Rashbaum said.

“A common claim, even in the days before the EMR [electronic medical record] and AI, was that the clinician did not take all available information into account in rendering treatment, including history of past and present condition, as reflected in the records, communication with past and other present treating clinicians, lab and radiology results, discussions with the patient, and physical examination findings,” he said. “So, if the clinician relied upon the support prompt to the exclusion of these other sources of information, that could be a very strong argument for the plaintiff.”

Chatbots, such as OpenAI’s ChatGPT, are another form of AI raising legal red flags. ChatGPT, trained on a massive set of text data, can carry out conversations, write code, draft emails, and answer any question posed. The chatbot has gained considerable credibility for accurately diagnosing rare conditions in seconds, and it recently passed the U.S. Medical Licensing Examination.

It’s unclear how many doctors are signing onto the ChatGPT website daily, but physicians are actively using the chatbot, particularly for assistance with prior authorization letters and to support decision-making processes in their practices, said Mr. LeTang.

When physicians ask ChatGPT a question, however, they should be mindful that ChatGPT could “hallucinate,” a term that refers to a generated response that sounds plausible but is factually incorrect or is unrelated to the context, explains Harvey Castro, MD, an emergency physician, ChatGPT health care expert, and author of the 2023 book “ChatGPT and Healthcare: Unlocking the Potential of Patient Empowerment.”

Acting on ChatGPT’s response without vetting the information places doctors at serious risk of a lawsuit, he said.

“Sometimes, the response is half true and half false,” he said. “Say, I go outside my specialty of emergency medicine and ask it about a pediatric surgical procedure. It could give me a response that sounds medically correct, but then I ask a pediatric cardiologist, and he says, ‘We don’t even do this. This doesn’t even exist!’ Physicians really have to make sure they are vetting the information provided.”

In response to ChatGPT’s growing usage by health care professionals, hospitals and practices are quickly implementing guidelines, policies, and restrictions that caution physicians about the accuracy of ChatGPT-generated information, adds Mr. LeTang.

Emerging best practices include avoiding the input of patient health information, personally identifiable information, or any data that could be commercially valuable or considered the intellectual property of a hospital or health system, he said.

“Another crucial guideline is not to rely solely on ChatGPT as a definitive source for clinical decision-making; physicians must exercise their professional judgment,” he said. “If best practices are not adhered to, the associated risks are present today. However, these risks may become more significant as AI technologies continue to evolve and become increasingly integrated into health care.”

The potential for misdiagnosis by AI systems and the risk of unnecessary procedures if physicians do not thoroughly evaluate and validate AI predictions are other dangers.

As an example, Mr. LeTang described a case in which a physician documents in the EHR that a patient has presented to the emergency department with chest pain and other signs of a heart attack, and an AI algorithm predicts that the patient is experiencing an active myocardial infarction. If the physician then sends the patient for stenting or angioplasty without other concrete evidence or tests to confirm the diagnosis, the doctor could later face a misdiagnosis complaint if the costly procedures were unnecessary.

“That’s one of the risks of using artificial intelligence,” he said. “A large percentage of malpractice claims is failure to diagnose, delayed diagnosis, or inaccurate diagnosis. What falls in the category of failure to diagnose is sending a patient for an unnecessary procedure or having an adverse event or bad outcome because of the failure to diagnose.”

So far, no AI lawsuits have been filed, but they may make an appearance soon, said Sue Boisvert, senior patient safety risk manager at The Doctors Company, a national medical liability insurer.

“There are hundreds of AI programs currently in use in health care,” she said. “At some point, a provider will make a decision that is contrary to what the AI recommended. The AI may be wrong, or the provider may be wrong. Either way, the provider will neglect to document their clinical reasoning, a patient will be harmed, and we will have the first AI claim.”

Upcoming AI legal risks to watch for

Lawsuits that allege biased patient care by physicians on the basis of algorithmic bias may also be forthcoming, analysts warn.

Much has been written about algorithmic bias that compounds and worsens inequities in socioeconomic status, ethnicity, sexual orientation, and gender in health systems. In 2019, a groundbreaking article in Science shed light on commonly used algorithms that are considered racially biased and how health care professionals often use such information to make medical decisions.

No claims involving AI bias have come down the pipeline yet, but it’s an area to watch, said Ms. Boisvert. She noted a website that highlights complaints and accusations of AI bias, including in health care.

“We need to be sure the training of the AI is appropriate, current, and broad enough so that there is no bias in the AI when it’s participating in the decision-making,” said Ms. Boisvert. “Imagine if the AI is diagnosing based on a dataset that is not local. It doesn’t represent the population at that particular hospital, and it’s providing inaccurate information to the physicians who are then making decisions about treatment.”

In pain management, for example, there are known differences in how patients experience pain, Ms. Boisvert said. If AI was being used to develop an algorithm for how a particular patient’s postoperative pain should be managed, and the algorithm did not include the differences, the pain control for a certain patient could be inappropriate. A poor outcome resulting from the treatment could lead to a claim against the physician or hospital that used the biased AI system, she said.
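One way a hospital could probe for the problem Ms. Boisvert describes is to compare the demographic mix of a model’s training data against its own patient population. The sketch below is illustrative only; the group labels, counts, and the 50% threshold are assumptions, not a validated audit method:

```python
# Hedged sketch: flag patient groups that are underrepresented in a
# model's training data relative to the local population. The threshold
# and all counts are hypothetical.

def underrepresented_groups(training: dict, local: dict,
                            ratio: float = 0.5) -> list:
    """Flag groups whose share of the training data is less than
    `ratio` times their share of the local population."""
    t_total = sum(training.values())
    l_total = sum(local.values())
    flags = []
    for group, l_count in local.items():
        t_share = training.get(group, 0) / t_total
        l_share = l_count / l_total
        if t_share < ratio * l_share:
            flags.append(group)
    return flags

# Hypothetical counts: group_c dominates locally but is rare in training.
training_counts = {"group_a": 8000, "group_b": 1500, "group_c": 500}
local_counts = {"group_a": 300, "group_b": 250, "group_c": 450}
print(underrepresented_groups(training_counts, local_counts))
```

A flag here would not prove the model is biased, but it would be exactly the kind of mismatch between training data and the local population that Ms. Boisvert warns about.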

In the future, as AI becomes more integrated and accepted in medicine, there may be a risk of legal complaints against doctors for not using AI, said Saurabh Jha, MD, an associate professor of radiology at the University of Pennsylvania, Philadelphia, and a scholar of AI in radiology.

“Ultimately, we might get to a place where AI starts helping physicians detect more or reduce the miss of certain conditions, and it becomes the standard of care,” Dr. Jha said. “For example, if it became part of the standard of care for pulmonary embolism [PE] detection, and you didn’t use it for PE detection, and there was a miss, that could put you at legal risk. We’re not at that stage yet, but that is one future possibility.”

Dr. Parikh envisions an even cloudier liability landscape as the potential grows for AI to control patient care decisions. In such a scenario, rather than just issuing an alert or prediction to a physician, the AI system could trigger an action.

For instance, if an algorithm is trained to predict sepsis and, once triggered, the AI could initiate a nurse-led rapid response or a change in patient care outside the clinician’s control, said Dr. Parikh, who coauthored a recent article on AI and medical liability in The Milbank Quarterly.

“That’s still very much the minority of how AI is being used, but as evidence is growing that AI-based diagnostic tools perform equivalent or even superior to physicians, these autonomous workflows are being considered,” Dr. Parikh said. “When the ultimate action upon the patient is more determined by the AI than what the clinician does, then I think the liability picture gets murkier, and we should be thinking about how we can respond to that from a liability framework.”
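The autonomous workflow Dr. Parikh describes reduces to a simple pattern: a risk score crosses a threshold and an action fires unless a clinician intervenes. In this hypothetical sketch (the score, threshold, and action names are invented), the override branch also records the clinician’s reasoning, the documentation Ms. Boisvert warns is often missing:

```python
# Hypothetical sketch of an AI-triggered sepsis workflow with a
# human-in-the-loop override. Score, threshold, and action strings are
# invented for illustration.

def sepsis_workflow(risk_score, clinician_override=None, threshold=0.8):
    # If the clinician countermands the AI, record their reasoning;
    # an undocumented disagreement is the liability gap described above.
    if clinician_override is not None:
        return f"clinician decision: {clinician_override}"
    if risk_score >= threshold:
        return "auto: nurse-led rapid response initiated"
    return "auto: continue routine monitoring"

print(sepsis_workflow(0.91))  # action fires with no clinician input
print(sepsis_workflow(0.91, "defer; score driven by chronic condition"))
```

The liability question the article raises is precisely about the first call: when the action upon the patient is determined by the score rather than by the clinician.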

How you can prevent AI-related lawsuits

The first step to preventing an AI-related claim is being aware of when and how you are using AI.

Ensure you’re informed about how the AI was trained, Ms. Boisvert stresses.

“Ask questions!” she said. “Is the AI safe? Are the recommendations accurate? Does the AI perform better than current systems? In what way? What databases were used, and did the programmers consider bias? Do I understand how to use the results?”

Never blindly trust the AI but rather view it as a data point in a medical decision, said Dr. Parikh. Ensure that other sources of medical information are properly accessed and that best practices for your specialty are still being followed.

When using any form of AI, document your usage, adds Mr. Rashbaum. A record that clearly outlines how the physician incorporated the AI is critical if a claim later arises in which the doctor is accused of AI-related malpractice, he said.

“Indicating how the AI tool was used, why it was used, and that it was used in conjunction with available clinical information and the clinician’s best judgment could reduce the risk of being found responsible as a result of AI use in a particular case,” he said.

Use chatbots, such as ChatGPT, the way they were intended: as support tools rather than definitive diagnostic instruments, adds Dr. Castro.

“Doctors should also be well-trained in interpreting and understanding the suggestions provided by ChatGPT and should use their clinical judgment and experience alongside the AI tool for more accurate decision-making,” he said.

In addition, because no AI-specific insurance product exists on the market, physicians and organizations using AI – particularly for direct patient care – should evaluate their current insurance or insurance-like products to determine where a claim involving AI might fall and whether the policy would respond, said Ms. Boisvert. The AI vendor or manufacturer will likely have protected itself through indemnification language in the purchase and sale agreement or contract, she said.

It will also become increasingly important for medical practices, hospitals, and health systems to put in place strong data governance strategies, Mr. LeTang said.

“AI relies on good data,” he said. “A data governance strategy is a key component to making sure we understand where the data is coming from, what it represents, how accurate it is, if it’s reproducible, what controls are in place to ensure the right people have the right access, and that if we’re starting to use it to build algorithms, it’s deidentified.”
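Parts of such a data governance strategy can be automated as simple checks on dataset metadata before the data is used for model building. The following sketch is a hypothetical illustration; the field names and rules are assumptions, not a standard:

```python
# Illustrative data-governance checks of the kind described above:
# provenance, validation status, and deidentification. The required
# metadata fields are hypothetical.

REQUIRED_FIELDS = {"source", "collected_on", "validated", "deidentified"}

def governance_issues(dataset_meta: dict) -> list:
    """Return a list of governance problems; empty means the dataset
    passes these (deliberately simple) checks."""
    issues = [f"missing metadata: {f}"
              for f in sorted(REQUIRED_FIELDS - dataset_meta.keys())]
    if dataset_meta.get("deidentified") is False:
        issues.append("contains identifiable data: not usable for model building")
    if not dataset_meta.get("validated", False):
        issues.append("accuracy not validated")
    return issues

meta = {"source": "ehr_extract_2023", "collected_on": "2023-04-01",
        "validated": True, "deidentified": False}
print(governance_issues(meta))  # flags the identifiable data
```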

While no malpractice claims associated with the use of AI have yet surfaced, this may change as the courts work through the backlog of malpractice claims delayed by COVID-19, and even more so as AI becomes more prevalent in health care, Mr. LeTang said.

“Similar to the attention that autonomous driving systems, like Tesla, receive when the system fails and accidents occur, we can be assured that media outlets will widely publicize AI-related medical adverse events,” he said. “It is crucial for health care professionals, AI developers, and regulatory authorities to work together to ensure the responsible use of AI in health care, with patient safety as the top priority. By doing so, they can mitigate the risks associated with AI implementation and minimize the potential for legal disputes arising from AI-related medical errors.”

A version of this article first appeared on Medscape.com.


The enemy of carcinogenic fumes is my friendly begonia



Sowing the seeds of cancer prevention

Are you looking to add to your quality of life, even though pets are not your speed? Might we suggest something with lower maintenance? Something a little greener?

Indoor plants can purify the air that comes from outside. Researchers at the University of Technology Sydney, in partnership with the plantscaping company Ambius, showed that a “green wall” made up of mixed indoor plants was able to suck up 97% of “the most toxic compounds” from the air in just 8 hours. We’re talking about lung-irritating, headache-inducing, cancer risk–boosting compounds from gasoline fumes, including benzene.

Public health initiatives often strive to reduce cardiovascular and obesity risks, but breathing seems pretty important too. According to the World Health Organization, household air pollution is responsible for about 2.5 million global premature deaths each year. And since 2020 we’ve become accustomed to spending more time inside and at home.

“This new research proves that plants should not just be seen as ‘nice to have,’ but rather a crucial part of every workplace wellness plan,” Ambius General Manager Johan Hodgson said in a statement released by the university.

So don’t spend hundreds of dollars on a fancy air filtration system when a wall of plants can do that for next to nothing. Find what works for you and your space and become a plant parent today! Your lungs will thank you.

But officer, I had to swerve to miss the duodenal ampulla

Tiny video capsule endoscopes have been around for many years, but they have one big weakness: The ingestible cameras’ journey through the GI tract is passively driven by gravity and the natural movement of the body, so they often miss potential problem areas.

Not anymore. That flaw has been addressed by medical technology company AnX Robotica, which has taken endoscopy to the next level by adding that wondrous directional control device of the modern electronic age, a joystick.

The new system “uses an external magnet and hand-held video game style joysticks to move the capsule in three dimensions,” which allows physicians to “remotely drive a miniature video capsule to all regions of the stomach to visualize and photograph potential problem areas,” according to Andrew C. Meltzer, MD, of George Washington University and associates, who conducted a pilot study funded by AnX Robotica.

The video capsule provided a 95% rate of visualization in the stomachs of 40 patients who were examined at a medical office building by an emergency medicine physician who had no previous specialty training in endoscopy. “Capsules were driven by the ER physician and then the study reports were reviewed by an attending gastroenterologist who was physically off site,” the investigators said in a written statement.

The capsule operator did receive some additional training, and development of artificial intelligence to self-drive the capsule is in the works, but for now, we’re talking about a device controlled by a human using a joystick. And we all know that 50-year-olds are not especially known for their joystick skills. For that we need real experts. Yup, we need to put those joystick-controlled capsule endoscopes in the hands of teenage gamers. Who wants to go first?

Maybe AI isn’t ready for the big time after all

“How long before some intrepid stockholder says: ‘Hey, instead of paying doctors, why don’t we just use the free robot instead?’ ” Those words appeared on LOTME but a month ago. After all, the AI is supposed to be smarter and more empathetic than a doctor. And did we mention it’s free? Or at least extremely cheap. Cheaper than, say, a group of recently unionized health care workers.

In early May, the paid employees manning the National Eating Disorders Association emergency hotline voted to unionize, as they felt overwhelmed and underpaid. Apparently, paying six people an extra few thousand a year was too much for NEDA’s leadership, as they decided a few weeks later to fire those workers, fully closing down the hotline. Instead of talking to a real person, people “calling in” for support would be met with Tessa, a wellness chatbot that would hopefully guide them through their crisis. Key word, hopefully.

In perhaps the least surprising twist of the year, NEDA was forced to walk back its decision about a week after its initial announcement. It all started with a viral Instagram post from a woman who called in and received the following advice from Tessa: Lose 1-2 pounds a week, count calories and work for a 500- to 1,000-calorie deficit, weigh herself weekly, and restrict her diet. Unfortunately, all of these suggestions were things that led to the development of the woman’s eating disorder.

Naturally, NEDA responded in good grace, accusing the woman of lying. A NEDA vice president even left some nasty comments on the post, but hastily deleted them a day later when NEDA announced it was shutting down Tessa “until further notice for a complete investigation.” NEDA’s CEO insisted they hadn’t seen that behavior from Tessa before, calling it a “bug” and insisting the bot would only be down temporarily until the triggers causing the bug were fixed.

In the aftermath, several doctors and psychologists chimed in, terming the rush to automate human roles dangerous and risky. After all, much of what makes these hotlines effective is the volunteers speaking from their own experience. An unsupervised bot doesn’t seem to have what it takes to deal with a mental health crisis, but we’re betting that Tessa will be back. As a wise cephalopod once said: Nobody gives a care about the fate of labor as long as they can get their instant gratification.

You can’t spell existential without s-t-e-n-t

This week, we’re including a special “bonus” item that, to be honest, has nothing to do with stents. That’s why our editor is making us call this a “bonus” (and making us use quote marks, too): It doesn’t really have anything to do with stents or health care or those who practice health care. Actually, his exact words were, “You can’t just give the readers someone else’s ****ing list and expect to get paid for it.” Did we mention that he looks like Jack Nicklaus but acts like BoJack Horseman?

Anywaaay, we’re pretty sure that the list in question – “America’s Top 10 Most Googled Existential Questions” – says something about the human condition, just not about stents:

1. Why is the sky blue?

2. What do dreams mean?

3. What is the meaning of life?

4. Why am I so tired?

5. Who am I?

6. What is love?

7. Is a hot dog a sandwich?

8. What came first, the chicken or the egg?

9. What should I do?

10. Do animals have souls?

Publications
Topics
Sections

 

Sowing the seeds of cancer prevention

Are you looking to add to your quality of life, even though pets are not your speed? Might we suggest something with lower maintenance? Something a little greener?

Indoor plants can purify the air that comes from outside. Researchers at the University of Technology Sydney, in partnership with the plantscaping company Ambius, showed that a “green wall” made up of mixed indoor plants was able to suck up 97% of “the most toxic compounds” from the air in just 8 hours. We’re talking about lung-irritating, headache-inducing, cancer risk–boosting compounds from gasoline fumes, including benzene.

Fraser Torpy/University of Technology Sydney

Public health initiatives often strive to reduce cardiovascular and obesity risks, but breathing seems pretty important too. According to the World Health Organization, household air pollution is responsible for about 2.5 million global premature deaths each year. And since 2020 we’ve become accustomed to spending more time inside and at home.

“This new research proves that plants should not just be seen as ‘nice to have,’ but rather a crucial part of every workplace wellness plan,” Ambius General Manager Johan Hodgson said in statement released by the university.

So don’t spend hundreds of dollars on a fancy air filtration system when a wall of plants can do that for next to nothing. Find what works for you and your space and become a plant parent today! Your lungs will thank you.
 

But officer, I had to swerve to miss the duodenal ampulla

Tiny video capsule endoscopes have been around for many years, but they have one big weakness: The ingestible cameras’ journey through the GI tract is passively driven by gravity and the natural movement of the body, so they often miss potential problem areas.

AnX Robotica

Not anymore. That flaw has been addressed by medical technology company AnX Robotica, which has taken endoscopy to the next level by adding that wondrous directional control device of the modern electronic age, a joystick.

The new system “uses an external magnet and hand-held video game style joysticks to move the capsule in three dimensions,” which allows physicians to “remotely drive a miniature video capsule to all regions of the stomach to visualize and photograph potential problem areas,” according to Andrew C. Meltzer, MD, of George Washington University and associates, who conducted a pilot study funded by AnX Robotica.

The video capsule provided a 95% rate of visualization in the stomachs of 40 patients who were examined at a medical office building by an emergency medicine physician who had no previous specialty training in endoscopy. “Capsules were driven by the ER physician and then the study reports were reviewed by an attending gastroenterologist who was physically off site,” the investigators said in a written statement.

The capsule operator did receive some additional training, and development of artificial intelligence to self-drive the capsule is in the works, but for now, we’re talking about a device controlled by a human using a joystick. And we all know that 50-year-olds are not especially known for their joystick skills. For that we need real experts. Yup, we need to put those joystick-controlled capsule endoscopes in the hands of teenage gamers. Who wants to go first?
 

Maybe AI isn’t ready for the big time after all

“How long before some intrepid stockholder says: ‘Hey, instead of paying doctors, why don’t we just use the free robot instead?’ ” Those words appeared on LOTME but a month ago. After all, the AI is supposed to be smarter and more empathetic than a doctor. And did we mention it’s free? Or at least extremely cheap. Cheaper than, say, a group of recently unionized health care workers.

Teen in bed checking her cell phone
maewjpho/Thinkstock

In early May, the paid employees manning the National Eating Disorders Association emergency hotline voted to unionize, as they felt overwhelmed and underpaid. Apparently, paying six people an extra few thousand a year was too much for NEDA’s leadership, as they decided a few weeks later to fire those workers, fully closing down the hotline. Instead of talking to a real person, people “calling in” for support would be met with Tessa, a wellness chatbot that would hopefully guide them through their crisis. Key word, hopefully.

In perhaps the least surprising twist of the year, NEDA was forced to walk back its decision about a week after its initial announcement. It all started with a viral Instagram post from a woman who called in and received the following advice from Tessa: Lose 1-2 pounds a week, count calories and work for a 500- to 1,000-calorie deficit, weigh herself weekly, and restrict her diet. Unfortunately, these were exactly the behaviors that had led to the woman’s eating disorder in the first place.

Naturally, NEDA responded with good grace, accusing the woman of lying. A NEDA vice president even left some nasty comments on the post, but hastily deleted them a day later when NEDA announced it was shutting down Tessa “until further notice for a complete investigation.” NEDA’s CEO insisted they hadn’t seen that behavior from Tessa before, calling it a “bug” and insisting the bot would only be down temporarily until the triggers causing the bug were fixed.

In the aftermath, several doctors and psychologists chimed in, calling the rush to automate human roles dangerous. After all, much of what makes these hotlines effective is the volunteers speaking from their own experience. An unsupervised bot doesn’t seem to have what it takes to deal with a mental health crisis, but we’re betting that Tessa will be back. As a wise cephalopod once said: Nobody gives a care about the fate of labor as long as they can get their instant gratification.
 

You can’t spell existential without s-t-e-n-t

This week, we’re including a special “bonus” item that, to be honest, has nothing to do with stents. That’s why our editor is making us call this a “bonus” (and making us use quote marks, too): It doesn’t really have anything to do with stents or health care or those who practice health care. Actually, his exact words were, “You can’t just give the readers someone else’s ****ing list and expect to get paid for it.” Did we mention that he looks like Jack Nicklaus but acts like BoJack Horseman?

Anywaaay, we’re pretty sure that the list in question – “America’s Top 10 Most Googled Existential Questions” – says something about the human condition, just not about stents:

1. Why is the sky blue?

2. What do dreams mean?

3. What is the meaning of life?

4. Why am I so tired?

5. Who am I?

6. What is love?

7. Is a hot dog a sandwich?

8. What came first, the chicken or the egg?

9. What should I do?

10. Do animals have souls?

 
