WOW! You spend that much time on the EHR?


Unlike many of you, maybe even most of you, I can recall when my office records were handwritten, some would say scribbled, on pieces of paper. They were decipherable by a select few. Some veteran assistants never mastered the skill. Pages were sometimes lavishly illustrated with drawings of body parts, often because I couldn’t remember or spell the correct anatomic term. When I needed to send a referral letter to another provider I typed it myself because dictating never quite suited my personality.

When I joined a small primary care group, the computer-savvy lead physician and a programmer developed our own homegrown EHR. It relied on scanning documents, as so many of us still generated handwritten notes. Even the most vociferous Luddites among us loved the system from day 2.

Dr. William G. Wilkoff

However, for a variety of reasons, some defensible, some just plain bad, our beloved system needed to be replaced after 7 years. We then invested in an off-the-shelf EHR system that promised more capabilities. We were told there would be a learning curve, but the plateau would come quickly and we would enjoy our new electronic assistant.

You’ve lived the rest of the story. The learning curve was steep and long, and the plateau was a time gobbler. I was probably the most efficient provider in the group, and after 6 months I was leaving the office an hour later than I had been while seeing the same number of patients. Most of my coworkers were staying late and/or working on the computer at home for an extra 2 hours. This change could be easily documented by speaking with our spouses and children. I understand from my colleagues who have stayed in the business that over the ensuing decade and a half since my first experience with the EHR, its insatiable appetite for a clinician’s time has not abated.

The authors of a recent article in Annals of Family Medicine offer up some advice on how this tragic situation might be brought under control. First, the investigators point out that the phenomenon of after-hours EHR work, sometimes referred to as WOW (work outside of work), has not gone unnoticed by health system administrators and the vendors who develop and sell the EHRs. However, analyzing the voluminous data necessary is not an easy task and for the most part has resulted in metrics that cannot be easily applied across a variety of practice scenarios. Many health care organizations, even large ones, have simply given up and rely on the WOW data and recommendations provided by the vendors, obviously lending the situation a faint odor of conflict of interest.

The bottom line is that after a couple of decades, there is no well-defined way of quantifying the time-gobbling effect of an EHR system. It would seem to me just asking the spouses and significant others of the clinicians would be sufficient. But the authors of the paper have more specific recommendations. First, they suggest that time working on the computer outside of scheduled time with patients should be separated from any other calculation of EHR usage. They encourage vendors and time-management researchers to develop standardized and validated methods for measuring active EHR use. And finally, they recommend that all EHR work done outside of time scheduled with patients be attributed to WOW. They feel that clearly labeling it “work outside of work” offers health care organizations a better chance of developing policies that will address the scourge of burnout.

This, unfortunately, is another tragic example of how we clinicians have lost control of our work environments. The fact that 20 years have passed and there is still no standardized method for determining how much time we spend on the computer is more evidence that we need to raise our voices.
 

Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littmann stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].


Could semaglutide treat addiction as well as obesity?


Could glucagonlike peptide-1 (GLP-1) receptor agonists such as semaglutide – approved as Ozempic to treat type 2 diabetes and as Wegovy to treat obesity, both from Novo Nordisk – also curb addictions and compulsive behaviors?

As demand for semaglutide for weight loss grew following approval of Wegovy by the U.S. Food and Drug Administration in 2021, anecdotal reports of unexpected potential added benefits also began to surface.

Some patients taking these drugs for type 2 diabetes or weight loss also lost interest in addictive and compulsive behaviors such as drinking alcohol, smoking, shopping, nail biting, and skin picking, as reported in articles in the New York Times and The Atlantic, among others.

There is also some preliminary research to support these observations.

This news organization invited three experts to weigh in.
 

Recent and upcoming studies

The senior author of a recent randomized controlled trial of 127 patients with alcohol use disorder (AUD), Anders Fink-Jensen, MD, said: “I hope that GLP-1 analogs in the future can be used against AUD, but before that can happen, several GLP-1 trials [are needed to] prove an effect on alcohol intake.”

His study involved patients who received exenatide (Byetta, Bydureon, AstraZeneca), the first-generation GLP-1 agonist approved for type 2 diabetes, over 26 weeks, but treatment did not reduce the number of heavy drinking days (the primary outcome), compared with placebo.  

However, in post hoc, exploratory analyses, heavy drinking days and total alcohol intake were significantly reduced in the subgroup of patients with AUD and obesity (body mass index > 30 kg/m2).

The participants were also shown pictures of alcohol or neutral subjects while they underwent functional magnetic resonance imaging. Those who had received exenatide, compared with placebo, had significantly less activation of brain reward centers when shown the pictures of alcohol.

“Something is happening in the brain and activation of the reward center is hampered by the GLP-1 compound,” Dr. Fink-Jensen, a clinical psychologist at the Psychiatric Centre Copenhagen, remarked in an email.

“If patients with AUD already fulfill the criteria for semaglutide (or other GLP-1 analogs) by having type 2 diabetes and/or a BMI over 30 kg/m2, they can of course use the compound right now,” he noted.

His team is also beginning a study in patients with AUD and a BMI ≥ 30 kg/m2 to investigate the effects on alcohol intake of semaglutide at doses up to 2.4 mg weekly, the maximum dose currently approved for obesity in the United States.

“Based on the potency of exenatide and semaglutide,” Dr. Fink-Jensen said, “we expect that semaglutide will cause a stronger reduction in alcohol intake” than exenatide.

Animal studies have also shown that GLP-1 agonists suppress alcohol-induced reward, alcohol intake, motivation to consume alcohol, alcohol seeking, and relapse drinking of alcohol, Elisabet Jerlhag Holm, PhD, noted.

Interestingly, these agents also suppress the reward, intake, and motivation to consume other addictive drugs like cocaine, amphetamine, nicotine, and some opioids, Jerlhag Holm, professor, department of pharmacology, University of Gothenburg, Sweden, noted in an email.

In a recently published preclinical study, her group provides evidence to help explain anecdotal reports from patients with obesity treated with semaglutide who claim they also reduced their alcohol intake. In the study, semaglutide both reduced alcohol intake (and relapse-like drinking) and decreased body weight of rats of both sexes.

“Future research should explore the possibility of semaglutide decreasing alcohol intake in patients with AUD, particularly those who are overweight,” said Prof. Holm.

“AUD is a heterogenous disorder, and one medication is most likely not helpful for all AUD patients,” she added. “Therefore, an arsenal of different medications is beneficial when treating AUD.”

Janice J. Hwang, MD, MHS, echoed these thoughts: “Anecdotally, there are a lot of reports from patients (and in the news) that this class of medication [GLP-1 agonists] impacts cravings and could impact addictive behaviors.”

“I would say, overall, the jury is still out,” as to whether anecdotal reports of GLP-1 agonists curbing addictions will be borne out in randomized controlled trials.

“I think it is much too early to tell” whether these drugs might be approved for treating addictions without more solid clinical trial data, noted Dr. Hwang, who is an associate professor of medicine and chief, division of endocrinology and metabolism, at the University of North Carolina at Chapel Hill.

Meanwhile, another research group at the University of North Carolina at Chapel Hill, led by psychiatrist Christian Hendershot, PhD, is conducting a clinical trial in 48 participants with AUD who are also smokers.

They aim to determine if patients who receive semaglutide at escalating doses (0.25 mg to 1.0 mg per week via subcutaneous injection) over 9 weeks will consume less alcohol (the primary outcome) and smoke less (a secondary outcome) than those who receive a placebo injection. Results are expected in October 2023.

Dr. Fink-Jensen has received an unrestricted research grant from Novo Nordisk to investigate the effects of GLP-1 receptor stimulation on weight gain and metabolic disturbances in patients with schizophrenia treated with an antipsychotic.

A version of this article first appeared on Medscape.com.


Daily multivitamins boost memory in older adults: A randomized trial


 

This transcript has been edited for clarity.

This is Dr. JoAnn Manson, professor of medicine at Harvard Medical School and Brigham and Women’s Hospital. I’d like to talk with you about a recent randomized clinical trial suggesting that multivitamins may improve memory and slow cognitive aging, compared with placebo, known as COSMOS (Cocoa Supplement and Multivitamins Outcome Study). This is the second COSMOS trial to show a benefit of multivitamins on memory and cognition. This trial involved a collaboration between Brigham and Columbia University and was published in the American Journal of Clinical Nutrition. I’d like to acknowledge that I am a coauthor of this study, together with Dr. Howard Sesso, who co-leads the main COSMOS trial with me.

Preserving memory and cognitive function is of critical importance to older adults. Nutritional interventions play an important role because we know the brain requires several nutrients for optimal health, and deficiencies in one or more of these nutrients may accelerate cognitive decline. Some of the micronutrients that are known to be important for brain health include vitamin B12, thiamin, other B vitamins, lutein, magnesium, and zinc, among others.

The current trial included 3,500 participants aged 60 or older and assessed performance on a web-based memory test. The multivitamin group did significantly better than the placebo group on memory tests and word recall, a finding that was estimated as the equivalent of slowing age-related memory loss by about 3 years. The benefit was first seen at 1 year and was sustained across the 3 years of the trial.

Intriguingly, in both the COSMOS-Web trial and the earlier COSMOS-Mind study, which was done in collaboration with Wake Forest, the participants with a history of cardiovascular disease showed the greatest benefits from multivitamins, perhaps due to lower nutrient status. But the basis for this finding needs to be explored further.

A few important caveats need to be emphasized. First, multivitamins and other dietary supplements will never be a substitute for a healthy diet and healthy lifestyle and should not distract from those goals. But multivitamins may have a role as a complementary strategy. Another caveat is that the randomized trials tested recommended dietary allowances and not megadoses of these micronutrients. In fact, randomized trials of high doses of isolated micronutrients have not clearly shown cognitive benefits, and this suggests that more is not necessarily better and may be worse. High doses also may be associated with toxicity, or they may interfere with absorption or bioavailability of other nutrients.

In COSMOS, over the average 3.6 years of follow-up, and in the earlier Physicians’ Health Study II, over 1 year of supplementation, multivitamins were found to be safe, without any clear risks or safety concerns. A further caveat is that although Centrum Silver was tested in this trial, we would not expect the benefit to be brand specific; other high-quality multivitamin brands would be expected to confer similar benefits. Of course, it’s important to check bottles for quality-control documentation such as the seals of the U.S. Pharmacopeia, NSF International, ConsumerLab.com, and other auditors.

Overall, the finding that a daily multivitamin improved memory and slowed cognitive decline in two separate COSMOS randomized trials is exciting, suggesting that multivitamin supplementation holds promise as a safe, accessible, and affordable approach to protecting cognitive health in older adults. Further research will be needed to understand who is most likely to benefit and the biological mechanisms involved. Expert committees will have to look at the research and decide whether any changes in guidelines are indicated in the future.

Dr. Manson is Professor of Medicine and the Michael and Lee Bell Professor of Women’s Health, Harvard Medical School and director of the Division of Preventive Medicine, Brigham and Women’s Hospital, both in Boston. She reported receiving funding/donations from Mars Symbioscience.

A version of this article first appeared on Medscape.com.

This transcript has been edited for clarity.


Beating jet lag at CHEST 2023


Sleep Medicine Network

Non-Respiratory Sleep Section

Want to feel your best when enjoying CHEST 2023 sessions, games, vendors, networking events, and much more on the island paradise of Hawai’i? It’s time to start making plans to align your circadian rhythm with Hawai’i Standard Time (HST).

Dr. Sabra Abbott, a circadian rhythm expert and director of the Circadian Medicine Clinic at Northwestern University, notes that “to best adapt to the time zone change, you can take advantage of the time-of-day specific phase-shifting properties of light and melatonin.”

Before heading west to the meeting, Dr. Abbott recommends that mainland U.S. travelers get extra light exposure in the evening. On arrival in Hawai’i, morning bright-light exposure should be limited, while afternoon/early evening light exposure is encouraged, which will help justify some extra hours on the beach! Don’t forget your sunglasses to help block light in the morning.

Once the meeting has concluded, attendees from the mainland United States will need to advance their internal clocks as they travel east back home. This can be achieved by taking melatonin 0.5 mg around bedtime and seeking bright light during the mid-to-late morning.

To develop a personalized sleep prescription based on your time zone and preferred sleep times, you can use an online jet lag calculator, such as Jet Lag Rooster (jetlag.sleepopolis.com; no affiliations with authors or Dr. Abbott).

To learn more about circadian rhythm alignment when working and traveling, we’ll see you at the CHEST 2023 session “Shifting to Hawai’i – Jet Lag, Shift Workers, and Sleep for Health Care Providers” (10/8/2023 at 0815 HST). If you haven’t registered for the meeting, make sure to do so soon! You’ll find the full schedule, pricing, and more at the CHEST 2023 website.

Paul Chung, DO – Section Fellow-in-Training
Lisa Wolfe, MD – Section Member-at-Large
William Healy, MD – Section Member-at-Large


AI efforts make strides in predicting progression to RA


MILAN – Two independent efforts to use artificial intelligence (AI) to predict the development of early rheumatoid arthritis (RA) in patients whose signs and symptoms do not meet full disease criteria showed good, near expert-level accuracy, according to findings from two studies presented at the annual European Congress of Rheumatology.

In one study, researchers from Leiden University Medical Center in the Netherlands developed an AI-based method to automatically analyze MR scans of extremities in order to predict early rheumatoid arthritis. The second study involved a Japanese research team that used machine learning to create a model capable of predicting progression from undifferentiated arthritis (UA) to RA. Both approaches would facilitate early diagnosis of RA, enabling timely treatment and improved clinical outcomes.


Lennart Jans, MD, PhD, who was not involved in either study but works with AI-assisted imaging analysis on a daily basis as head of clinics in musculoskeletal radiology at Ghent University Hospital and a professor of radiology at Ghent University in Belgium, said that integrating AI into health care poses several challenging aspects that need to be addressed. “There are three main challenges associated with the development and implementation of AI-based tools in clinical practice,” he said. “Firstly, obtaining heterogeneous datasets from different image hardware vendors, diverse racial and ethnic backgrounds, and various ages and genders is crucial for training and testing the AI algorithms. Secondly, AI algorithms need to achieve a predetermined performance level depending on the specific use case. Finally, a regulatory pathway must be followed to obtain the necessary FDA or MDR [medical devices regulation] certification before applying an AI use case in clinical practice.”
 

RA prediction

Yanli Li, the first author of the study and a member of the division of image processing at Leiden University Medical Center, explained the potential benefits of early RA prediction. “If we could determine whether a patient presenting with clinically suspected arthralgia (CSA) or early onset arthritis (EAC) is likely to develop RA in the near future, physicians could initiate treatment earlier, reducing the risk of disease progression.”

Currently, rheumatologists estimate the likelihood of developing RA by visually scoring MR scans using the RAMRIS scoring system. “We decided to explore the use of AI,” Dr. Li explained, “because it could save time, reduce costs and labor, eliminate the need for scoring training, and allow for hypothesis-free discoveries.”

The research team collected MR scans of the hands and feet from Leiden University Medical Center’s radiology department. The dataset consisted of images from 177 healthy individuals, 692 subjects with CSA (including 113 who developed RA), and 969 with EAC (including 447 who developed RA). The images underwent automated preprocessing to remove artifacts and standardize the input for the computer. Subsequently, a deep learning model was trained to predict RA development within a 2-year time frame.

The training process involved several steps. Initially, the researchers pretrained the model to learn anatomy by masking parts of the images and tasking the computer with reconstructing them. Subsequently, the AI was trained to differentiate between the groups (EAC vs. healthy and CSA vs. healthy), then between RA and other disorders. Finally, the AI model was trained to predict RA.

The accuracy of the model was evaluated using the area under the receiver operating characteristic curve (AUROC). The model trained using MR scans of the hands (including the wrist and metacarpophalangeal joints) achieved a mean AUROC of 0.84 for distinguishing EAC from healthy subjects and 0.83 for distinguishing CSA from healthy subjects. The model trained using MR scans of both the hands and feet achieved a mean AUROC of 0.71 for distinguishing RA from non-RA cases in EAC. The accuracy of the model in predicting RA using MR scans of the hands was 0.73, which closely matches the reported accuracy of visual scoring by human experts (0.74). Importantly, the generation and analysis of heat maps suggested that the deep learning model predicts RA based on known inflammatory signals.
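For readers less familiar with the metric, AUROC is the probability that a randomly chosen positive case (here, a patient who developed RA) receives a higher model score than a randomly chosen negative case. A minimal, self-contained sketch (illustrative only, not the Leiden group’s code) computes it directly from that definition:

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs the scores rank correctly (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two patients who developed RA scored 0.8 and 0.6;
# two who did not scored 0.8 and 0.2 (note the one tied pair).
print(auroc([1, 0, 1, 0], [0.8, 0.8, 0.6, 0.2]))  # 0.625
```

An AUROC of 0.5 is chance-level ranking, so the 0.84 and 0.83 values reported above indicate strong separation between groups.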

“Automatic RA prediction using AI interpretation of MR scans is feasible,” Dr. Li said. “Incorporating additional clinical data will likely further enhance the AI prediction, and the heat maps may contribute to the discovery of new MRI biomarkers for RA development.”

“AI models and engines have achieved near-expertise levels for various use cases, including the early detection of RA on MRI scans of the hands,” said Dr. Jans, the Ghent University radiologist. “We are observing the same progress in AI detection of rheumatic diseases in other imaging modalities, such as radiography, CT, and ultrasound. However, it is important to note that the reported performances often apply to selected cohorts with standardized imaging protocols. The next challenge [for Dr. Li and colleagues, and others] will be to train and test these algorithms using more heterogeneous datasets to make them applicable in real-world settings.”
 

 

 

A ‘transitional phase’ of applying AI techniques

“In a medical setting, as computer scientists, we face unique challenges,” pointed out Berend C. Stoel, MSc, PhD, the senior author of the Leiden study. “Our team consists of approximately 30-35 researchers, primarily electrical engineers or computer scientists, situated within the radiology department of Leiden University Medical Center. Our focus is on image processing, seeking AI-based solutions for image analysis, particularly utilizing deep learning techniques.”

Their objective is to validate this method more broadly, and to achieve that, they require collaboration with other hospitals. Until now, they have primarily worked with a specific type of MR imaging: extremity MR scans. These scans are conducted in only a few centers equipped with extremity MR scanners, which can accommodate only hands or feet.

“We are currently in a transitional phase, aiming to apply our methods to standard MR scans, which are more widely available,” Dr. Stoel informed this news organization. “We are engaged in various projects. One project, nearing completion, involves the scoring of early RA, where we train the computer to imitate the actions of rheumatologists or radiologists. We started with a relatively straightforward approach, but AI offers a multitude of possibilities. In the project presented at EULAR, we manipulated the images in a different manner, attempting to predict future events. We also have a parallel project where we employ AI to detect inflammatory changes over time by analyzing sequences of images (MR scans). Furthermore, we have developed AI models to distinguish between treatment and placebo groups. Once the neural network has been trained for this task, we can inquire about the location and timing of changes, thereby gaining insights into the therapy’s response.

“When considering the history of AI, it has experienced both ups and downs. We are currently in a promising phase, but if certain projects fail, expectations might diminish. My hope is that we will indeed revolutionize and enhance disease diagnosis, monitoring, and prediction. Additionally, AI may provide us with additional information that we, as humans, may not be able to extract from these images. However, it is difficult to predict where we will stand in 5-10 years,” he concluded.
 

Predicting disease progression

The second study, which explored the application of AI in predicting the progression of undifferentiated arthritis (UA) to RA, was presented by Takayuki Fujii, MD, PhD, assistant professor in the department of advanced medicine for rheumatic diseases at Kyoto University’s Graduate School of Medicine in Japan. “Predicting the progression of RA from UA remains an unmet medical need,” he reminded the audience.


Dr. Fujii’s team used data from the KURAMA cohort, a large observational RA cohort from a single center, to develop a machine learning model. The study included a total of 322 patients initially diagnosed with UA. The deep neural network (DNN) model was trained using 24 clinical features that are easily obtainable in routine clinical practice, such as age, sex, C-reactive protein (CRP) levels, and disease activity score in 28 joints using erythrocyte sedimentation rate (DAS28-ESR). The DNN model achieved a prediction accuracy of 85.1% in the training cohort. When the model was applied to validation data from an external dataset consisting of 88 patients from the ANSWER cohort, a large multicenter observational RA cohort, the prediction accuracy was 80%.
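The KURAMA model itself is not published here, but the general recipe (a classifier trained on routine clinical variables to output a progression probability) can be sketched with a one-layer stand-in. Everything below, including the synthetic features and weights, is illustrative and hypothetical, not the authors’ model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for routine clinical features (e.g. age, CRP,
# DAS28-ESR), standardized; labels encode "progressed to RA" (1) or not (0).
n = 400
X = rng.normal(size=(n, 3))
true_w = np.array([0.5, 2.0, 1.5])          # made-up generating weights
y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Logistic regression fit by gradient descent: the simplest "neural
# network" (a single sigmoid unit), standing in for the study's deeper model.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= 0.5 * (X.T @ (p - y)) / n          # log-loss gradient step
    b -= 0.5 * (p - y).mean()

pred = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
accuracy = (pred == y).mean()
print(round(accuracy, 3))
```

Note that this reports training accuracy only; a real model must be checked on held-out data, which is exactly why the external validation in the ANSWER cohort (80% accuracy) is the more meaningful figure.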

“We have developed a machine learning model that can predict the progression of RA from UA using clinical parameters,” Dr. Fujii concluded. “This model has the potential to assist rheumatologists in providing appropriate care and timely intervention for patients with UA.”

“Dr. Fujii presented a fascinating study,” Dr. Jans said. “They achieved an accuracy of 80% when applying a DNN model to predict progression from UA to RA. This level of accuracy is relatively high and certainly promising. However, it is important to consider that a pre-test probability of 30% [for progressing from UA to RA] is also relatively high, which partially explains the high accuracy. Nonetheless, this study represents a significant step forward in the clinical management of patients with UA, as it helps identify those who may benefit the most from regular clinical follow-up.”

Dr. Li and Dr. Stoel report no relevant financial relationships with industry. Dr. Fujii has received speaking fees from Asahi Kasei, AbbVie, Chugai, and Tanabe Mitsubishi Pharma. Dr. Jans has received speaking fees from AbbVie, UCB, Lilly, and Novartis; he is cofounder of RheumaFinder. The Leiden study was funded by the Dutch Research Council and the China Scholarship Council. The study by Dr. Fujii and colleagues had no outside funding.

A version of this article first appeared on Medscape.com.

 

Predicting disease progression

The second study, which explored the application of AI in predicting the progression of undifferentiated arthritis (UA) to RA, was presented by Takayuki Fujii, MD, PhD, assistant professor in the department of advanced medicine for rheumatic diseases at Kyoto University’s Graduate School of Medicine in Japan. “Predicting the progression of RA from UA remains an unmet medical need,” he reminded the audience.

Dr. Takayuki Fujii
Dr. Takayuki Fujii

Dr. Fujii’s team used data from the KURAMA cohort, a large observational RA cohort from a single center, to develop a machine learning model. The study included a total of 322 patients initially diagnosed with UA. The deep neural network (DNN) model was trained using 24 clinical features that are easily obtainable in routine clinical practice, such as age, sex, C-reactive protein (CRP) levels, and disease activity score in 28 joints using erythrocyte sedimentation rate (DAS28-ESR). The DNN model achieved a prediction accuracy of 85.1% in the training cohort. When the model was applied to validation data from an external dataset consisting of 88 patients from the ANSWER cohort, a large multicenter observational RA cohort, the prediction accuracy was 80%.

“We have developed a machine learning model that can predict the progression of RA from UA using clinical parameters,” Dr. Fujii concluded. “This model has the potential to assist rheumatologists in providing appropriate care and timely intervention for patients with UA.”

“Dr. Fujii presented a fascinating study,” Dr. Jans said. “They achieved an accuracy of 80% when applying a DNN model to predict progression from UA to RA. This level of accuracy is relatively high and certainly promising. However, it is important to consider that a pre-test probability of 30% [for progressing from UA to RA]  is also relatively high, which partially explains the high accuracy. Nonetheless, this study represents a significant step forward in the clinical management of patients with UA, as it helps identify those who may benefit the most from regular clinical follow-up.”

Dr. Li and Dr. Stoel report no relevant financial relationships with industry. Dr. Fujii has received speaking fees from Asahi Kasei, AbbVie, Chugai, and Tanabe Mitsubishi Pharma. Dr. Jans has received speaking fees from AbbVie, UCB, Lilly, and Novartis; he is cofounder of RheumaFinder. The Leiden study was funded by the Dutch Research Council and the China Scholarship Council. The study by Dr. Fujii and colleagues had no outside funding.

A version of this article first appeared on Medscape.com.

MILAN – Two independent efforts to use artificial intelligence (AI) to predict the development of early rheumatoid arthritis (RA) from patients with signs and symptoms not meeting full disease criteria showed good, near expert-level accuracy, according to findings from two studies presented at the annual European Congress of Rheumatology.

In one study, researchers from Leiden University Medical Center in the Netherlands developed an AI-based method to automatically analyze MR scans of extremities in order to predict early rheumatoid arthritis. The second study involved a Japanese research team that used machine learning to create a model capable of predicting progression from undifferentiated arthritis (UA) to RA. Both approaches would facilitate early diagnosis of RA, enabling timely treatment and improved clinical outcomes.

Dr. Lennart Jans

Lennart Jans, MD, PhD, who was not involved in either study but works with AI-assisted imaging analysis on a daily basis as head of clinics in musculoskeletal radiology at Ghent University Hospital and a professor of radiology at Ghent University in Belgium, said that integrating AI into health care poses several challenging aspects that need to be addressed. “There are three main challenges associated with the development and implementation of AI-based tools in clinical practice,” he said. “Firstly, obtaining heterogeneous datasets from different image hardware vendors, diverse racial and ethnic backgrounds, and various ages and genders is crucial for training and testing the AI algorithms. Secondly, AI algorithms need to achieve a predetermined performance level depending on the specific use case. Finally, a regulatory pathway must be followed to obtain the necessary FDA or MDR [medical devices regulation] certification before applying an AI use case in clinical practice.”
 

RA prediction

Yanli Li, the first author of the study and a member of the division of image processing at Leiden University Medical Center, explained the potential benefits of early RA prediction. “If we could determine whether a patient presenting with clinically suspected arthralgia (CSA) or early onset arthritis (EAC) is likely to develop RA in the near future, physicians could initiate treatment earlier, reducing the risk of disease progression.”

Currently, rheumatologists estimate the likelihood of developing RA by visually scoring MR scans using the RAMRIS scoring system. “We decided to explore the use of AI,” Dr. Li explained, “because it could save time, reduce costs and labor, eliminate the need for scoring training, and allow for hypothesis-free discoveries.”

The research team collected MR scans of the hands and feet from Leiden University Medical Center’s radiology department. The dataset consisted of images from 177 healthy individuals, 692 subjects with CSA (including 113 who developed RA), and 969 with EAC (including 447 who developed RA). The images underwent automated preprocessing to remove artifacts and standardize the input for the computer. Subsequently, a deep learning model was trained to predict RA development within a 2-year time frame.

The training process involved several steps. Initially, the researchers pretrained the model to learn anatomy by masking parts of the images and tasking the computer with reconstructing them. Subsequently, the AI was trained to differentiate between the groups (EAC vs. healthy and CSA vs. healthy), then between RA and other disorders. Finally, the AI model was trained to predict RA.
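The masked-reconstruction pretraining step can be sketched in a few lines (a NumPy toy; the patch size, mask ratio, and mean-fill "reconstruction" are illustrative stand-ins, not the study's actual deep learning architecture):

```python
import numpy as np

def mask_patches(image, patch=8, ratio=0.5, rng=None):
    """Hide a fixed fraction of non-overlapping patches, returning the
    masked image and a boolean map of the hidden pixels."""
    rng = rng or np.random.default_rng(0)
    h, w = image.shape
    coords = [(i, j) for i in range(0, h, patch) for j in range(0, w, patch)]
    n_hide = max(1, int(len(coords) * ratio))
    hide = rng.choice(len(coords), size=n_hide, replace=False)
    masked = image.copy()
    hidden = np.zeros(image.shape, dtype=bool)
    for k in hide:
        i, j = coords[k]
        masked[i:i + patch, j:j + patch] = 0.0
        hidden[i:i + patch, j:j + patch] = True
    return masked, hidden

rng = np.random.default_rng(42)
image = rng.random((32, 32))           # stand-in for an MR slice
masked, hidden = mask_patches(image, rng=rng)

# Toy "reconstruction": fill hidden pixels with the mean of the visible
# ones. The study's model is instead trained to minimize this error.
recon = masked.copy()
recon[hidden] = masked[~hidden].mean()
error = float(np.mean((recon[hidden] - image[hidden]) ** 2))
```

By learning to fill in hidden regions, a model is forced to internalize normal anatomy before it is ever asked to classify disease.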

The accuracy of the model was evaluated using the area under the receiver operator characteristic curve (AUROC). The model that was trained using MR scans of the hands (including the wrist and metacarpophalangeal joints) achieved a mean AUROC of 0.84 for distinguishing EAC from healthy subjects and 0.83 for distinguishing CSA from healthy subjects. The model trained using MR scans of both the hands and feet achieved a mean AUROC of 0.71 for distinguishing RA from non-RA cases in EAC. The accuracy of the model in predicting RA using MR scans of the hands was 0.73, which closely matches the reported accuracy of visual scoring by human experts (0.74). Importantly, the generation and analysis of heat maps suggested that the deep learning model predicts RA based on known inflammatory signals.
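AUROC has a simple interpretation: the probability that the model scores a randomly chosen positive case higher than a randomly chosen negative one. A minimal illustration (pure Python; the scores are invented, not the study's data):

```python
def auroc(scores_pos, scores_neg):
    """Probability a positive case outranks a negative one (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Toy model scores: higher = more RA-like.
ra_scores      = [0.9, 0.8, 0.6, 0.55]   # patients who developed RA
healthy_scores = [0.7, 0.4, 0.3, 0.2]    # healthy controls
print(auroc(ra_scores, healthy_scores))  # → 0.875
```

An AUROC of 0.5 is chance-level ranking; 1.0 is perfect separation, which puts the reported 0.71-0.84 values in context.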

“Automatic RA prediction using AI interpretation of MR scans is feasible,” Dr. Li said. “Incorporating additional clinical data will likely further enhance the AI prediction, and the heat maps may contribute to the discovery of new MRI biomarkers for RA development.”

“AI models and engines have achieved near-expertise levels for various use cases, including the early detection of RA on MRI scans of the hands,” said Dr. Jans, the Ghent University radiologist. “We are observing the same progress in AI detection of rheumatic diseases in other imaging modalities, such as radiography, CT, and ultrasound. However, it is important to note that the reported performances often apply to selected cohorts with standardized imaging protocols. The next challenge [for Dr. Li and colleagues, and others] will be to train and test these algorithms using more heterogeneous datasets to make them applicable in real-world settings.”

A ‘transitional phase’ of applying AI techniques

“In a medical setting, as computer scientists, we face unique challenges,” pointed out Berend C. Stoel, MSc, PhD, the senior author of the Leiden study. “Our team consists of approximately 30-35 researchers, primarily electrical engineers or computer scientists, situated within the radiology department of Leiden University Medical Center. Our focus is on image processing, seeking AI-based solutions for image analysis, particularly utilizing deep learning techniques.”

Their objective is to validate the method more broadly, which requires collaboration with other hospitals. Until now, they have worked primarily with a specific type of MR image: extremity MR scans, which are performed in only a few centers equipped with extremity MR scanners that can accommodate only hands or feet.

“We are currently in a transitional phase, aiming to apply our methods to standard MR scans, which are more widely available,” Dr. Stoel informed this news organization. “We are engaged in various projects. One project, nearing completion, involves the scoring of early RA, where we train the computer to imitate the actions of rheumatologists or radiologists. We started with a relatively straightforward approach, but AI offers a multitude of possibilities. In the project presented at EULAR, we manipulated the images in a different manner, attempting to predict future events. We also have a parallel project where we employ AI to detect inflammatory changes over time by analyzing sequences of images (MR scans). Furthermore, we have developed AI models to distinguish between treatment and placebo groups. Once the neural network has been trained for this task, we can inquire about the location and timing of changes, thereby gaining insights into the therapy’s response.

“When considering the history of AI, it has experienced both ups and downs. We are currently in a promising phase, but if certain projects fail, expectations might diminish. My hope is that we will indeed revolutionize and enhance disease diagnosis, monitoring, and prediction. Additionally, AI may provide us with additional information that we, as humans, may not be able to extract from these images. However, it is difficult to predict where we will stand in 5-10 years,” he concluded.
 

Predicting disease progression

The second study, which explored the application of AI in predicting the progression of undifferentiated arthritis (UA) to RA, was presented by Takayuki Fujii, MD, PhD, assistant professor in the department of advanced medicine for rheumatic diseases at Kyoto University’s Graduate School of Medicine in Japan. “Predicting the progression of RA from UA remains an unmet medical need,” he reminded the audience.

Dr. Takayuki Fujii

Dr. Fujii’s team used data from the KURAMA cohort, a large observational RA cohort from a single center, to develop a machine learning model. The study included a total of 322 patients initially diagnosed with UA. The deep neural network (DNN) model was trained using 24 clinical features that are easily obtainable in routine clinical practice, such as age, sex, C-reactive protein (CRP) levels, and disease activity score in 28 joints using erythrocyte sedimentation rate (DAS28-ESR). The DNN model achieved a prediction accuracy of 85.1% in the training cohort. When the model was applied to validation data from an external dataset consisting of 88 patients from the ANSWER cohort, a large multicenter observational RA cohort, the prediction accuracy was 80%.
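The train-then-externally-validate pattern behind those two accuracy figures can be sketched with a toy classifier (all scores, labels, and the cutoff below are invented for illustration; the actual DNN and its 24 features are not described in detail):

```python
def accuracy(labels, preds):
    """Fraction of cases classified correctly."""
    return sum(l == p for l, p in zip(labels, preds)) / len(labels)

# Hypothetical stand-in for the DNN: flag "progresses to RA" when a single
# composite score (imagine scaled CRP + DAS28-ESR) exceeds a cutoff chosen
# on the training cohort. The real model used 24 clinical features.
def predict(scores, cutoff):
    return [int(s > cutoff) for s in scores]

cutoff = 2.0                    # tuned on the training cohort
train_scores, train_labels = [3.1, 4.2, 1.0, 0.8, 3.8, 0.5], [1, 1, 0, 0, 1, 0]
ext_scores, ext_labels = [2.5, 0.9, 3.3, 1.7, 0.4], [1, 0, 1, 1, 0]

train_acc = accuracy(train_labels, predict(train_scores, cutoff))  # 1.0
ext_acc = accuracy(ext_labels, predict(ext_scores, cutoff))        # 0.8
```

The drop from training accuracy to external accuracy in the toy mirrors the study's 85.1% versus 80%: performance on an independent cohort is the more honest estimate of real-world behavior.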

“We have developed a machine learning model that can predict the progression of RA from UA using clinical parameters,” Dr. Fujii concluded. “This model has the potential to assist rheumatologists in providing appropriate care and timely intervention for patients with UA.”

“Dr. Fujii presented a fascinating study,” Dr. Jans said. “They achieved an accuracy of 80% when applying a DNN model to predict progression from UA to RA. This level of accuracy is relatively high and certainly promising. However, it is important to consider that a pre-test probability of 30% [for progressing from UA to RA] is also relatively high, which partially explains the high accuracy. Nonetheless, this study represents a significant step forward in the clinical management of patients with UA, as it helps identify those who may benefit the most from regular clinical follow-up.”

Dr. Li and Dr. Stoel report no relevant financial relationships with industry. Dr. Fujii has received speaking fees from Asahi Kasei, AbbVie, Chugai, and Tanabe Mitsubishi Pharma. Dr. Jans has received speaking fees from AbbVie, UCB, Lilly, and Novartis; he is cofounder of RheumaFinder. The Leiden study was funded by the Dutch Research Council and the China Scholarship Council. The study by Dr. Fujii and colleagues had no outside funding.

A version of this article first appeared on Medscape.com.

Article Source

AT EULAR 2023


Sewer data says Ohio person has had COVID for 2 years


Scientists think that a person in Ohio who has been infected with COVID-19 for 2 years is shedding thousands of times more of the virus than normal, according to wastewater monitoring data. The strain of the virus appears to be unique, the researchers said. 

The mutated version of the virus was discovered by a team of researchers, led by University of Missouri–Columbia virologist Marc Johnson, PhD, that has been studying standalone mutations identified in wastewater. On Twitter, Dr. Johnson said their work could help warn people of a potential risk.

“If you knew of an exposure of a group of people to a deadly disease, there would be an obligation to inform them,” he wrote.

He believes the infected person lives in Columbus, works at a courthouse in a nearby county, and has gut health problems. The county where the person works has a population of just 15,000 people but had record COVID wastewater levels in May, The Columbus Dispatch reported. The unique COVID strain that Dr. Johnson is researching was the only COVID strain found in Fayette County’s wastewater.

“This person was shedding thousands of times more material than a normal person ever would,” Dr. Johnson told the Dispatch. “I think this person isn’t well. ... I’m guessing they have GI issues.”

Monitoring wastewater for COVID-19 is only used to inform public health officials of community levels and spread of the virus. People with COVID are not tracked down using such information.

The Centers for Disease Control and Prevention told the Dispatch that the findings do not mean there’s a public health threat.

“Unusual or ‘cryptic’ sequences identified in wastewater may represent viruses that can replicate in particular individuals, but not in the general population,” the CDC wrote in a statement to the newspaper. “This can be because of a compromised immune system. CDC and other institutions conduct studies in immunocompromised individuals to understand persistent infection and virus evolution.”

Ohio health officials told the newspaper that they don’t consider the situation a public health threat because the cryptic strain hasn’t spread beyond two sewer sheds for those 2 years.

Dr. Johnson and colleagues have been researching other unique COVID strains found in wastewater. They wrote a paper, currently in preprint, about a similar case in Wisconsin.

In the paper, the researchers suggest some people are persistently infected, calling them “prolonged shedders.” The researchers wrote that prolonged shedders could be human or “nonhuman,” and that “increased global monitoring of such lineages in wastewater could help anticipate future circulating mutations and/or variants of concern.”

Earlier in 2023, the CDC announced it was ending its community-level reporting of COVID test data and would rely more heavily on hospitalization reports and wastewater monitoring. COVID hospitalizations dipped to 7,212 nationally for the week of June 1-8, which is a 6% decline from the week prior, according to the CDC. That number of hospitalizations equals about two hospitalizations per 100,000 people.
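The per-capita figure follows directly from the weekly count (assuming a US population of roughly 332 million, a figure not stated in the article):

```python
hospitalizations = 7_212        # week of June 1-8, per the CDC
us_population = 332_000_000     # assumed figure; not stated in the article
rate_per_100k = hospitalizations / us_population * 100_000
print(round(rate_per_100k, 2))  # → 2.17, i.e., about two per 100,000
```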

A version of this article first appeared on WebMD.com.


RA and demyelinating disease: No consistent link to TNFi


– Treatment of rheumatoid arthritis with tumor necrosis factor inhibitors (TNFi) does not appear to carry a consistent, significant risk for demyelinating disease, according to a systematic literature review presented at the annual European Congress of Rheumatology.

The review was conducted by Isabel Castrejon, MD, of the rheumatology department at Gregorio Marañón General University Hospital, Madrid, and colleagues. “In male RA patients, a marginal and slight increase in risk was found. The low number of events provides reassurance regarding the use of these drugs. However, careful consideration is recommended for individuals at the highest risk of demyelinating diseases,” Dr. Castrejon said in an interview. “Health care providers should evaluate the potential benefits and risks of TNFi treatment on a case-by-case basis and closely monitor patients for any signs or symptoms of demyelinating events.”

The researchers performed the review because early data from biologic registries did not provide sufficient clarity, and the association between TNFi exposure and inflammatory central nervous system events remains poorly understood.
 

Key findings from the analyzed data

Dr. Castrejon and colleagues’ review considered 368 studies that included patients with RA, treatment with any biologic (including TNFi) or synthetic disease-modifying antirheumatic drugs (DMARDs), and demyelinating events.

The studies focused on assessing the risk of demyelinating events following treatment with biologics, particularly TNFi. Some studies included only patients with RA, while others examined mixed forms of arthritis. In cases involving mixed populations, patients with RA were analyzed separately. Additionally, certain studies solely considered multiple sclerosis, while others encompassed various types of demyelinating events. Dr. Castrejon said that a meta-analysis of the studies could not be performed because of their heterogeneity.

Among the 368 studies, four observational cohort studies and three nested case-control studies reported a risk of demyelinating events following treatment with biologics. Two nested case-control studies indicated an increased risk in mixed populations but did not separately analyze the subgroup of patients with RA. Two observational cohort studies revealed a marginally increased risk in men with RA who undergo TNFi treatment. In the first study, the incidence was 19.7/100,000 patient-years (95% confidence interval, 13.7-27.3) with a standardized incidence ratio (SIR) of 1.38 (95% CI, 0.96-1.92), a definite-case risk ratio (RR) of 0.83 (95% CI, 0.51-1.26), and an RR for male patients of 2.75 (95% CI, 1.31-5.06). The second study had an SIR of 1.11 (95% CI, 0.63-1.93), an RR for patients with RA of 0.65 (95% CI, 0.24-1.72), and a male RR of 3.48 (95% CI, 1.45-8.37).
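The reported incidence and risk-ratio figures translate into counts as follows (the toy counts and group rates below are chosen only to land near the published values; they are not the studies' raw data):

```python
# Incidence rate: events per 100,000 patient-years of follow-up.
events = 59
patient_years = 300_000
incidence = events / patient_years * 100_000   # ≈ 19.67 per 100,000 patient-years

# A risk ratio compares incidence between two groups, e.g., men with RA on
# TNFi vs. a comparator group (both rates here are illustrative).
exposed_rate = 22 / 100_000
reference_rate = 8 / 100_000
rr = exposed_rate / reference_rate             # ≈ 2.75
```

A confidence interval that crosses 1.0, as several of those reported above do, means the data are compatible with no increased risk at all.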

An unresolved question is whether demyelinating events are attributable to the underlying disease itself, which may not have been recognized at the time of diagnosis, or whether they are caused by DMARDs. Additionally, the articles that the reviewers analyzed did not consider patient characteristics that could interact with other factors, such as comorbidities or smoking, that might influence their susceptibility to the development of demyelinating events.

How should clinicians manage patients with RA who are at high risk of developing demyelinating diseases? “Typically, we initiate treatment with conventional synthetic disease-modifying methotrexate and then progress to other drugs,” Maya H. Buch, MBChB, PhD, professor of rheumatology at the University of Leeds (England), said in an interview. “For patients in high-risk groups, there are alternative treatment strategies, especially in comparison to TNFi, where there may be a rationale for their use.” Dr. Buch was not involved in the review.

Dr. Castrejon and colleagues reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


The researchers performed the review because early data from biologic registries did not provide sufficient clarity, and the association between TNFi exposure and inflammatory central nervous system events remains poorly understood.
 

Key findings from the analyzed data

Dr. Castrejon and colleagues’ review considered 368 studies whose inclusion criteria were patients with RA, treatment with any biologic (including TNFi) or synthetic disease-modifying antirheumatic drug (DMARD), and a reported demyelinating event.

The studies focused on assessing the risk of demyelinating events following treatment with biologics, particularly TNFi. Some studies included only patients with RA, while others examined mixed forms of arthritis. In cases involving mixed populations, patients with RA were analyzed separately. Additionally, certain studies solely considered multiple sclerosis, while others encompassed various types of demyelinating events. Dr. Castrejon said that a meta-analysis of the studies could not be performed because of their heterogeneity.

Among the 368 studies, four observational cohort studies and three nested case-control studies reported on the risk of demyelinating events following treatment with biologics. Two nested case-control studies indicated an increased risk in mixed populations but did not separately analyze the subgroup of patients with RA. Two observational cohort studies revealed a marginally increased risk in men with RA who undergo TNFi treatment. In the first study, the incidence was 19.7/100,000 patient-years (95% confidence interval, 13.7-27.3), with a standardized incidence ratio (SIR) of 1.38 (95% CI, 0.96-1.92), a definite case risk ratio (RR) of 0.83 (95% CI, 0.51-1.26), and an RR for male patients of 2.75 (95% CI, 1.31-5.06). The second study reported an SIR of 1.11 (95% CI, 0.63-1.93), an RR for patients with RA of 0.65 (95% CI, 0.24-1.72), and an RR for male patients of 3.48 (95% CI, 1.45-8.37).
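A standardized incidence ratio compares the number of events observed in the treated cohort with the number expected from background population rates. As a rough illustration of how such a figure and its confidence interval are derived (the event counts below are hypothetical, not taken from the review), using Byar's approximation to the exact Poisson limits:

```python
import math

def sir_with_ci(observed: int, expected: float, z: float = 1.96):
    """Standardized incidence ratio (observed/expected events) with an
    approximate 95% CI via Byar's approximation to the Poisson interval."""
    sir = observed / expected
    # Byar's approximation for the lower and upper limits on the observed count
    low = observed * (1 - 1 / (9 * observed) - z / (3 * math.sqrt(observed))) ** 3
    high = (observed + 1) * (
        1 - 1 / (9 * (observed + 1)) + z / (3 * math.sqrt(observed + 1))
    ) ** 3
    return sir, low / expected, high / expected

# Hypothetical illustration: 30 observed demyelinating events where
# background rates predict 21.7 -- these numbers are made up for the sketch.
sir, lo, hi = sir_with_ci(30, 21.7)
print(f"SIR = {sir:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```

A CI that straddles 1.0, as in both studies' overall SIRs above, is why the review describes the overall signal as inconsistent.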

An unresolved question is whether demyelinating events are attributable to the underlying disease itself, which may not have been recognized at the time of diagnosis, or whether they are caused by DMARDs. Additionally, the articles that the reviewers analyzed did not account for patient characteristics, such as comorbidities or smoking, that might influence susceptibility to demyelinating events.

How should clinicians manage patients with RA who are at high risk of developing demyelinating diseases? “Typically, we initiate treatment with conventional synthetic disease-modifying [drugs such as] methotrexate and then progress to other drugs,” Maya H. Buch, MBChB, PhD, professor of rheumatology at the University of Leeds (England), said in an interview. “For patients in high-risk groups, there are alternative treatment strategies, especially in comparison to TNFi, where there may be a rationale for their use.” Dr. Buch was not involved in the review.

Dr. Castrejon and colleagues reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Article Source

AT EULAR 2023


High-intensity interval training has sustainable effects in patients with inflammatory arthritis


– In a randomized trial, high-intensity interval training (HIIT) enhanced cardiorespiratory fitness (CRF) and mitigated cardiovascular disease (CVD) risk factors in patients with inflammatory joint diseases (IJD). Notably, the improvement in CRF did not coincide with changes in pain or fatigue.

Kristine Norden, of the Center for Treatment of Rheumatic and Musculoskeletal Diseases, Norwegian National Advisory Unit on Rehabilitation in Rheumatology, Diakonhjemmet Hospital, Oslo, presented the late-breaking results of the ExeHeart trial at the annual European Congress of Rheumatology. The trial aimed to evaluate the short- and long-term effects of 12 weeks of supervised HIIT in patients with IJD.

Ms. Norden said in an interview that “HIIT is a feasible physiotherapeutic intervention with sustainable effects in patients with IJD. It does not exacerbate symptoms of IJD and can be implemented in primary care settings.”
 

The trial

The ExeHeart trial is a randomized controlled trial designed to assess the effects of HIIT on CRF, CVD risk, and disease activity in patients with IJD. The trial is a collaborative effort with patient research partners and aligns with patients’ requests for effective nonpharmacologic treatments. The outcomes being evaluated include CRF (primary outcome), CVD risk factors, anthropometric measures, disease activity, and patient-reported outcomes related to pain, fatigue, disease, physical activity, and exercise.

A total of 60 patients with IJD were recruited from the Preventive Cardio-Rheuma clinic at Diakonhjemmet. They were randomly assigned to receive either standard care (including relevant lifestyle advice and cardiopreventive medication) or standard care along with a 12-week HIIT intervention supervised by physiotherapists. Assessments were conducted at baseline, at 3 months (primary endpoint), and at 6 months post baseline. There was no supervised intervention between the 3- and 6-month time points.

The median age of the participants was 59 years, with 34 participants (57%) being women. The types of IJD among the participants included rheumatoid arthritis in 45%, spondyloarthritis in 32%, and psoriatic arthritis in 23%. Furthermore, 49 patients (82%) had a high risk for CVD.

The participants were divided into two groups: a control group (n = 30) and a HIIT group (n = 30). The HIIT group underwent a 12-week intervention consisting of twice-weekly supervised 4×4-minute HIIT sessions: four 4-minute work intervals at 90%-95% of peak heart rate, alternated with active recovery at about 70% of peak heart rate. The control group engaged in unsupervised moderate-intensity exercise sessions. The primary outcome was change in CRF, assessed as peak oxygen uptake (VO2 max) on a cardiopulmonary exercise test. The secondary outcomes of pain and fatigue were evaluated with a questionnaire (Numeric Rating Scale, 0-10, where 0 represents no pain or fatigue).
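The interval prescription above translates directly into per-patient heart-rate targets. A minimal sketch (the peak heart rate used below is an invented example, not a trial participant's value):

```python
def hiit_zones(peak_hr: float):
    """Heart-rate targets for one 4x4 session as described in the trial:
    work intervals at 90%-95% of peak HR, active recovery near 70% of peak HR."""
    work_low = 0.90 * peak_hr
    work_high = 0.95 * peak_hr
    recovery = 0.70 * peak_hr
    return work_low, work_high, recovery

# For a hypothetical peak heart rate of 170 bpm:
lo, hi, rec = hiit_zones(170)
print(f"work intervals {lo:.0f}-{hi:.0f} bpm, recovery ~{rec:.0f} bpm")
```

In practice, peak heart rate would come from the same cardiopulmonary exercise test used to measure VO2 max, rather than an age-based formula.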

Following HIIT, a statistically significant difference in VO2 max (2.5 mL/kg per min; P < .01) favored the exercise group at 3 months, while no significant differences were found in pain or fatigue. The between-group difference in VO2 max was maintained at 6 months (2.6 mL/kg per min; P < .01), again with no significant differences in pain or fatigue. A per-protocol analysis at 3 months showed a between-group difference in VO2 max of 3.2 mL/kg per min (P < .01).

Ms. Norden concluded that the clinical implications of these findings are significant, as increased CRF achieved through HIIT reflects an improvement in the body’s ability to deliver oxygen to working muscles. Consequently, this enhancement in CRF can lead to overall health improvements and a reduced risk for CVD.
 

 

 

Long-lasting effects

Christopher Edwards, MBBS, MD, honorary consultant rheumatologist at University Hospital Southampton (England) NHS Foundation Trust, University of Southampton, questioned whether the gains in CRF would be maintained: “I really wish we had data on these patients at 12 months as well, so we could see if the effects last even longer.” Addressing how to sustain them, Ms. Norden said: “Regarding intensity, there are clear indications that engaging in moderate- and high-intensity workouts is more beneficial. So, I would certainly recommend at least one high-intensity exercise session per week for those patients, while also incorporating lower- and moderate-intensity exercises if desired. However, for individuals aiming to maximize their oxygen uptake, high-intensity exercise is considered the most effective approach.”

There is compelling evidence supporting the benefits of physical activity in improving disease activity among patients with IJD, making it a critical component of nonpharmacologic treatment. However, individuals with rheumatic and musculoskeletal conditions generally exhibit lower levels of physical activity, compared with their healthy counterparts. Recognizing the importance of CVD prevention in patients with IJD, EULAR recommends routine CVD screening for individuals diagnosed with IJD.

Ms. Norden and coauthors report no relevant financial relationships.

A version of this article first appeared on Medscape.com.
