Using AI to ID Osteoporosis: A Medico-Legal Minefield?
Could an artificial intelligence (AI)–driven tool that mines medical records for suspected cases of osteoporosis be so successful that it becomes a potential liability? Yes, according to Christopher White, PhD, executive director of Maridulu Budyari Gumal, the Sydney Partnership for Health, Education, Research, and Enterprise, a research translation center in Liverpool, Australia.
In a thought-provoking presentation at the Endocrine Society’s AI in Healthcare Virtual Summit, White described the results after his fracture liaison team at Prince of Wales Hospital in Randwick, Australia, tried to plug the “osteoporosis treatment gap” by mining medical records to identify patients with the disorder.
‘Be Careful What You Wish For’
White and colleagues developed a robust standalone database over 20 years that informed fracture risk among patients with osteoporosis in Sydney. The database included all relevant clinical information, as well as bone density measurements, on about 30,000 patients and could be interrogated for randomized controlled trial recruitment.
However, a “crisis” occurred around 2011, when the team received a recruitment request for the first head-to-head comparison of alendronate with romosozumab. “We had numerous postmenopausal women in the age range with the required bone density, but we hadn’t captured the severity of their vertebral fracture or how many they actually had,” White told this news organization. For recruitment into the study, participants must have had at least two moderate or severe vertebral fractures or a proximal femoral fracture sustained between 3 and 24 months before recruitment.
White turned to his hospital’s mainframe, which had coding data and time intervals for patients who were admitted with vertebral or hip fractures. He calculated how many patients who met the study criteria had been discharged and how many of those he thought he’d be able to capture through the mainframe. He was confident he would have enough, but he was wrong. He underrecruited and could not participate in the trial.
Determined not to wind up in a similar situation in the future, he investigated and found that other centers were struggling with similar problems. This led to a collaboration with four investigators who were using AI and Advanced Encryption Standard (AES) coding to identify patients at risk for osteoporotic fractures. White, meanwhile, had developed a natural language processing tool called XRAIT that also identified patients at fracture risk. A study comparing the two electronic search programs, which screen medical records for fractures, found that both reliably identified patients who had had a fracture. White and his colleagues concluded that hybrid tools combining XRAIT and AES would likely improve the identification of patients with osteoporosis who would require follow-up or might participate in future trials.
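XRAIT’s internals are not described in this article, so the following is only a minimal, hypothetical sketch of the general technique it names: a rule-based natural language processing screen over radiology report text. The fracture vocabulary, negation handling, and report format below are illustrative assumptions, not XRAIT’s actual logic.

```python
import re

# Illustrative fracture vocabulary; a real tool would use a far richer
# term set, an ontology such as SNOMED CT, or a trained model.
FRACTURE_TERMS = re.compile(
    r"\b(fracture[sd]?|crush(ed)? vertebra\w*|compression deformit\w+)\b",
    re.IGNORECASE,
)
# Crude negation cue immediately preceding a finding ("no acute fracture").
NEGATION = re.compile(r"\b(no|without|negative for)\b[^.]{0,40}$", re.IGNORECASE)


def flag_fracture_mentions(report_text: str) -> list[str]:
    """Return sentences that mention a fracture and are not plainly negated."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", report_text):
        match = FRACTURE_TERMS.search(sentence)
        if match and not NEGATION.search(sentence[: match.start()]):
            flagged.append(sentence.strip())
    return flagged


if __name__ == "__main__":
    report = (
        "Moderate compression deformity of L1, likely osteoporotic. "
        "No acute rib fracture. Lungs are clear."
    )
    print(flag_fracture_mentions(report))
    # ['Moderate compression deformity of L1, likely osteoporotic.']
```

Even a crude screen of this kind errs toward sensitivity, which is exactly the property that, as White describes below, surfaced far more untreated patients than coding-based searches had.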
Those patients were not being identified sooner for multiple reasons, White explained. Sometimes, the radiologist would report osteoporosis, but it wouldn’t get coded. Or, in the emergency department, a patient with a fracture would be treated and then sent home, and the possibility of osteoporosis wasn’t reported.
“As we went deeper and deeper with our tools into the medical record, we found more and more patients who hadn’t been coded or reported but who actually had osteoporosis,” White said. “It was incredibly prevalent.”
But the number of patients identified was more than the hospital could comfortably handle.
Ironically, he added, “To my relief and probably not to the benefit of the patients, there was a system upgrade of the radiology reporting system, which was incompatible with the natural language processing technology that I had installed. The AI was turned off at that point, but I had a look over the edge and into the mine pit.”
“The lesson learned,” White told this news organization, is “If you mine the medical record for unidentified patients before you know what to do with the output, you create a medico-legal minefield. You need to be careful what you wish for with technology, because it may actually come true.”
Grappling With the Treatment Gap
An (over)abundance of patients is likely contributing to the “osteoporosis treatment gap” that Australia’s fracture liaison services, which handle many of these patients, are grappling with. One recent meta-analysis showed that not all eligible patients are offered treatment and that not all patients who are offered treatment actually start it. Another study showed that only a minority of patients who start treatment (anywhere from 20% to 40%) are still taking it at about 3 years, White said.
Various types of fracture liaison services exist, he noted. The model that has been shown to best promote adherence is the one requiring clinicians to “identify, educate [usually, the primary care physician], evaluate, start treatment, continue treatment, and follow up at 12 months to confirm that there is adherence.”
What’s happening now, he said, is that the technology is identifying a high number of vertebral crush fractures, and there’s no education or evaluation. “The radiologist just refers the patient to a primary care physician and hopes for the best. AI isn’t contributing to solving the treatment gap problem; it’s amplifying it. It’s ahead of the ability of organizations to accommodate the findings.”
Solutions, he said, would require support at the top of health systems and organizations, and funding to proceed; data surveys concentrating on vertical integration of the medical record to follow patients wherever they are — eg, hospital, primary care — in their health journeys; a workflow with synchronous diagnosis and treatment planning, delivery, monitoring, and payment; and clinical and community champions advocating and “leading the charge in health tech.”
Furthermore, he advised, organizations need to be “very, very careful with safety and security — that is, managing the digital risks.”
“Oscar Wilde said there are two tragedies in life: One is not getting what one wants, and the other is getting it,” White concluded. “In my career, we’ve moved on from not knowing how to treat osteoporosis to knowing how to treat it. And that is both an asset and a liability.”
A version of this article first appeared on Medscape.com.
Broken Sleep Linked to MASLD
TOPLINE:
Fragmented sleep — that is, increased wakefulness and reduced sleep efficiency — is associated with metabolic dysfunction–associated steatotic liver disease (MASLD), a study using actigraphy showed.
METHODOLOGY:
- Researchers assessed sleep-wake rhythms in 35 patients with MASLD (median age, 58 years; 66% were men; 80% with metabolic syndrome) and 16 matched healthy controls (median age, 61 years; 50% were men) using data collected 24/7 via actigraphy for 4 weeks.
- Subanalyses were conducted with MASLD comparator groups: 16 patients with metabolic dysfunction–associated steatohepatitis (MASH), 8 with MASH with cirrhosis, and 11 with non–MASH-related cirrhosis.
- All participants visited the clinic at baseline, week 2, and week 4 to undergo a clinical investigation and complete questionnaires about their sleep.
- A standardized sleep hygiene education session was conducted at week 2.
TAKEAWAY:
- Actigraphy data from patients with MASLD did not reveal significant differences in bedtime, sleep-onset latency, sleep duration, wake-up time, or time in bed compared with controls.
- However, compared with controls, those with MASLD woke 55% more often at night (8.5 vs 5.5 awakenings), lay awake 113% longer after having first fallen asleep (45.4 minutes vs 21.3 minutes), and slept more often and longer during the day (decreased sleep efficiency); these percentages follow directly from the raw values, as the short check after this list shows.
- Subgroup analyses showed that actigraphy-measured sleep patterns and quality were similarly impaired in patients with MASH, MASH with cirrhosis, and non–MASH-related cirrhosis.
- Patients with MASLD self-reported their fragmented sleep as shorter sleep with a delayed onset. In sleep diaries, 32% of patients with MASLD reported sleep disturbances caused by psychological stress, compared with only 6.25% of controls and 9% of patients with cirrhosis.
- The sleep education session did not change the actigraphy measures or the sleep parameters assessed with sleep questionnaires at the end of the study.
IN PRACTICE:
“We concluded from our data that sleep fragmentation plays a role in the pathogenesis of human MASLD. Whether MASLD causes sleep disorders or vice versa remains unknown. The underlying mechanism presumably involves genetics, environmental factors, and the activation of immune responses — ultimately driven by obesity and metabolic syndrome,” the study’s corresponding author said.
SOURCE:
The study, led by Sofia Schaeffer, PhD, University of Basel, Switzerland, was published online in Frontiers in Network Physiology.
LIMITATIONS:
The study had several limitations. There was a significant difference in body mass index between patients with MASLD (median, 31) and controls (median, 23.5), representing a potential confounder that could explain the differences in sleep behavior. Undetected obstructive sleep apnea could also be a confounding factor. The small number of participants limited the interpretation and generalization of the data, especially in the MASLD subgroups.
DISCLOSURES:
This study was supported by a grant from the University of Basel. One coauthor received a research grant from the University Center for Gastrointestinal and Liver Diseases, Basel, Switzerland. Another coauthor was employed by NovoLytiX. Schaeffer and the remaining coauthors declared that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
A version of this article first appeared on Medscape.com.
Coffee Consumption Linked to Specific Gut Bacterium
TOPLINE:
Coffee consumption is strongly and reproducibly associated with the gut bacterium Lawsonibacter asaccharolyticus, which is about fourfold more abundant in coffee drinkers across multiple international cohorts, a multicohort metagenomic study showed.
METHODOLOGY:
- The researchers selected coffee as a model to investigate the interplay between specific foods and the intestinal microbial community.
- They conducted a multicohort, multiomic analysis of US and UK populations with detailed dietary information from 22,867 participants, which they then integrated with public data from 211 cohorts comprising 54,198 participants.
- They conducted various in vitro experiments to expand and validate their findings, including adding coffee to media containing the L asaccharolyticus species that had been isolated from human feces.
TAKEAWAY:
- L asaccharolyticus is highly prevalent, with about fourfold higher average abundance in coffee drinkers, and its growth is stimulated in vitro by coffee supplementation.
- The link between coffee consumption and the microbiome was highly reproducible across different populations (area under the curve, 0.89), driven largely by the presence and abundance of L asaccharolyticus; a sketch of how such a cross-cohort figure is typically computed follows this list.
- Similar associations were found in analyses of data from 25 countries. The prevalence of the bacterium was high in European countries with high per capita coffee consumption, such as Luxembourg, Denmark, and Sweden, and very low in countries with low per capita coffee consumption, such as China, Argentina, and India.
- Plasma metabolomics on 438 samples identified several metabolites enriched among coffee drinkers, with quinic acid and its potential derivatives associated with both coffee and L asaccharolyticus.
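The paper’s actual machine-learning pipeline is not detailed here, so the sketch below shows only the generic shape of such a cross-cohort analysis: train a classifier on one cohort’s species-abundance profiles, score it on an independent cohort, and report the area under the ROC curve. The data, features, and model choice are all placeholder assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def synthetic_cohort(n: int) -> tuple[np.ndarray, np.ndarray]:
    """Fake species-abundance matrix (n samples x 50 taxa) with
    coffee-drinker labels; column 0 stands in for L asaccharolyticus."""
    y = rng.integers(0, 2, size=n)
    X = rng.lognormal(size=(n, 50))
    X[:, 0] *= np.where(y == 1, 4.0, 1.0)  # ~fourfold higher in drinkers
    return X, y

# Train on one cohort, validate on an independent one
X_train, y_train = synthetic_cohort(500)
X_test, y_test = synthetic_cohort(300)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"cross-cohort AUC: {auc:.2f}")
```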
IN PRACTICE:
“Our study provides insights into how the gut microbiome potentially mediates the chemistry — and thus health benefits — of coffee,” the study authors wrote. “The microbial mechanisms underlying the metabolism of coffee are a step towards mapping the role of specific foods on the gut microbiome, and similar patterns of microorganism–food interactions for other dietary elements should be sought with systematic epidemiologic and metagenomic investigations.”
SOURCE:
Paolo Manghi, PhD, University of Trento, Italy, led the study, which was published online in Nature Microbiology.
LIMITATIONS:
The authors relied on food questionnaires to assess coffee intake. The study is observational, and the clinical implications are unknown.
DISCLOSURES:
This work was supported by ZOE, a biotech company, and TwinsUK, an adult twin registry funded by the Wellcome Trust, Medical Research Council, Versus Arthritis, European Union Horizon 2020, Chronic Disease Research Foundation, the National Institute for Health and Care Research — Clinical Research Network and Biomedical Research Centre based at Guy’s and St. Thomas’ NHS Foundation Trust in partnership with King’s College London. Manghi had no competing interests. Several other coauthors reported financial relationships with ZOE, and three are cofounders of the company.
A version of this article first appeared on Medscape.com.
Plant-Based Food Prioritized Over Meat in Dietary Guidelines Report
The scientific report that offers evidence-based guidance for the next iteration of the Dietary Guidelines for Americans has been submitted to federal agencies, and the document — which already has generated controversy because of its emphasis on plant-based foods — is now open for public comment.
“We saw something over and over again — when you look at a population level, diets for which the predominant composition was plants performed better when it came to health outcomes,” advisory committee member Cheryl Anderson, PhD, MPH, who is a professor and dean of the Herbert Wertheim School of Public Health and Human Longevity Science at the University of California, San Diego, said in an interview. “There’s a pretty consistent body of literature showing benefits of fruits, vegetables, and legumes and reductions in salt, added sugars, and saturated fats.”
Clinicians should read and comment on the report, said Anderson.
“Commenting sends the right signal that they are interested in what’s needed for nutrition education,” she said. “It will also activate a conversation with the people who are writing the guidelines.”
Instructions for submitting comments online through February 10, 2025, and for participating in the oral comment meeting on January 16, 2025, are posted online.
The Department of Agriculture (USDA) and the Department of Health & Human Services will use the report as a key resource, alongside the public comments and agency input, as they jointly develop the Dietary Guidelines for Americans, 2025-2030.
Meat Given a Back Seat
Overall, the advisory committee defined a “healthy dietary pattern” as one that is “higher in vegetables, fruits, legumes (ie, beans, peas, lentils), nuts, whole grains, fish/seafood, and vegetable oils higher in unsaturated fat — and lower in red and processed meats, sugar-sweetened foods and beverages, refined grains, and saturated fat.”
The report emphasizes “plain drinking water” as the primary beverage for people to consume and states that sugar-sweetened beverage consumption should be limited.
It recommends limiting total saturated fat intake to less than 10% of daily calories and replacing it with unsaturated fat, particularly polyunsaturated fats. For a 2000-calorie diet, that cap works out to fewer than about 22 g of saturated fat per day (200 kcal at 9 kcal/g of fat).
Notably, the report advocates increasing the consumption of beans, peas, and lentils and decreasing starchy vegetables (such as potatoes), as well as reducing total protein foods by reducing meat, poultry, and eggs. This recommendation and the report’s broad emphasis on plant-based foods have drawn criticism, mainly from the food industry.
Also likely to be controversial are the recommendations to move beans, peas, and lentils from the vegetable group to the protein group and the proposed reorganization of the order of the protein foods group to list beans, peas, and lentils first, followed by nuts, seeds, and soy products; then seafood; and finally meats, poultry, and eggs.
Gastroenterologists and dietitians should support the emphasis on plant-based protein sources, water for hydration, and the importance of personalized nutrition plans, including culturally diverse and ethnic food options, said Stephanie Gold, MD, assistant professor of medicine at the Icahn School of Medicine at Mount Sinai and a gastroenterologist at Mount Sinai Hospital, both in New York City.
“The newly proposed 2025 Dietary Guidelines are approaching a Mediterranean-style diet by focusing on plant-based protein sources while limiting red meat and saturated fats, as well as added sugar. By including these legumes in the protein category (not only as a starchy vegetable), the proposed guideline recognizes both the health benefits and sustainability of plant-based proteins,” Gold said in an interview.
Although the report recognizes “the potential negative impact and the varying definitions of ultra-processed foods, it does not provide concrete recommendations regarding intake, and perhaps, this could be an area of focus going forward,” she added.
Anderson noted that the science around ultra-processed food is “underdeveloped.” However, the definition of a healthy diet “has never suggested that we have foods that are extremely processed in it.”
“Right now, there’s a lot of interest in ultra-processed foods and what they mean for health, but the science is going to need to catch up with that interest,” Anderson said.
What’s Next
The release of the scientific report is part of a five-step process to develop the new guidelines that included input from the public during the report’s development. According to the USDA, the advisory committee received approximately 9900 public comments, more than any previous committee.
Once the new dietary guidelines are complete, Anderson said, “clinicians have an opportunity to really lean into a science-based framework to talk about overall health concerns and reducing the burden of diet-related illnesses with their patients.”
Meanwhile, they can voice their approval or concerns about the scientific report.
Anderson and Gold reported no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
Lung CT Can Detect Coronary Artery Disease, Predict Death
Coronary artery calcium (CAC), an incidental finding on low-dose CT scans performed for lung cancer screening, was highly prevalent and predicted death and cardiovascular events in a Canadian screening cohort.
“The high prevalence of asymptomatic coronary artery disease (83%) was surprising, as was the prevalence of extensive CAC (30%),” principal investigator Gary Small, MBChB, PhD, a cardiologist at the University of Ottawa Heart Institute in Ontario, Canada, said in an interview.
“The size of effect was also surprising, as was the persistence of the effect even in the presence of elevated mortality risk from other causes,” he said. “Extensive coronary disease was associated with a twofold increase in risk for death or cardiovascular events over 4 years of follow-up,” even after adjustment for risk for death from cancer and other comorbidities such as chronic obstructive pulmonary disease.
“CAC as reported on chest CT exams is often ignored and not factored into clinical practice,” he noted. “The presence of CAC, however, provides a very real and very personal perspective on an individual’s cardiovascular risk. It is a true example of personalized medicine.”
The study was published online in The Canadian Medical Association Journal.
Potential Risk Reduction
In March 2017, Ontario Health launched a pilot low-dose CT lung cancer screening program for high-risk individuals between the ages of 55 and 74 years, Small explained. As CAC, a marker of coronary artery disease, is seen easily during such a scan, the researchers analyzed the lung CTs to determine the prevalence of coronary artery disease and whether CAC was associated with increased risk.
The team quantified CAC using an estimated Agatston score and identified the composite primary outcome of all-cause death and cardiovascular events using linked electronic medical record data from Ottawa Hospital up to December 2023. Among the 1486 people who underwent screening (mean age, 66 years; 52% men; 68% current smokers), CAC was detected in 1232 (82.9%). CAC was mild to moderate in 793 participants (53.4%) and extensive in 439 (29.5%). No CAC was detected in 254 (17.1%) participants.
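The Agatston method mentioned above is well defined in the imaging literature: on each CT slice, every calcified lesion (attenuation of at least 130 Hounsfield units) contributes its area in mm² multiplied by a density weight of 1 to 4 based on peak attenuation, and the per-lesion scores are summed across slices. Here is a minimal sketch of that scoring logic; the array shapes, minimum lesion size, and example values are illustrative assumptions, not the study’s code.

```python
import numpy as np
from scipy import ndimage

def agatston_slice_score(hu: np.ndarray, pixel_area_mm2: float,
                         min_area_mm2: float = 1.0) -> float:
    """Agatston score for one axial CT slice given in Hounsfield units.
    The total score is this value summed over all slices."""
    labels, n_lesions = ndimage.label(hu >= 130)  # connected calcified lesions
    score = 0.0
    for i in range(1, n_lesions + 1):
        lesion = labels == i
        area = lesion.sum() * pixel_area_mm2
        if area < min_area_mm2:  # ignore sub-threshold specks
            continue
        peak = hu[lesion].max()
        # Density weight: 1 for 130-199 HU, 2 for 200-299, 3 for 300-399, 4 for >=400
        weight = min(int(peak // 100), 4)
        score += area * weight
    return score

if __name__ == "__main__":
    slice_hu = np.zeros((4, 4))
    slice_hu[1:3, 1:3] = 250  # one 4-pixel lesion with peak 250 HU
    print(agatston_slice_score(slice_hu, pixel_area_mm2=0.5))  # 2.0 mm2 * 2 = 4.0
```

Risk categories such as “extensive CAC” then correspond to ranges of the summed score; as noted above, the study used an estimated Agatston score derived from the lung screening scans.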
At follow-up, 78 participants (5.2%) experienced the primary composite outcome, including 39 (8.9%) with extensive CAC, 32 (4.0%) with mild to moderate CAC, and 7 (2.8%) with no CAC.
A total of 49 deaths occurred, including 16 cardiovascular deaths and 19 cancer deaths, of which 10 were from lung cancer. The cardiovascular deaths included sudden cardiac death (eight participants), fatal stroke (six participants), and one death each from heart failure and peripheral vascular disease.
On multivariable analysis, extensive CAC was associated with the composite primary outcome (adjusted hazard ratio [aHR], 2.13), all-cause mortality (aHR, 2.39), and cardiovascular events (aHR, 2.06).
Extensive CAC remained predictive of cardiovascular events even after adjustment for noncardiovascular death as a competing risk (HR, 2.05).
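The adjusted hazard ratios above come from multivariable survival modeling; the exact covariates and software are not given in this article. As a generic illustration of how such a Cox proportional hazards model is fit, here is a sketch using the lifelines library on entirely synthetic data; competing-risk analyses (eg, Fine-Gray models) extend the same idea to account for noncardiovascular death.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 1486  # same size as the screened cohort; everything else is made up

df = pd.DataFrame({
    "extensive_cac": rng.integers(0, 2, size=n),  # 1 = extensive CAC
    "age": rng.normal(66, 5, size=n),
    "followup_years": rng.uniform(0.5, 4.0, size=n),
    "event": rng.integers(0, 2, size=n),  # death or cardiovascular event
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="event")
cph.print_summary()  # hazard ratios are reported as exp(coef)
```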
“Our data highlight to lung cancer screening professionals the prevalence of this silent risk factor and re-emphasize the importance of this finding [ie, CAC] as an opportunity for risk reduction,” Small said.
“In terms of next steps, the journey toward cardiovascular risk reduction begins with a clear report of CAC on the lung cancer screening record,” he noted. “Following this step, professionals involved in the lung cancer screening program might consider a local management pathway to ensure that this opportunity for health improvement is not lost or ignored. Preventive medicine of this type would typically involve primary care.”
Managing Other Findings
Commenting on the study, Anna Bader, MD, assistant professor of radiology and biomedical imaging at the Yale School of Medicine in New Haven, Connecticut, said that “low-dose CT for lung cancer screening offers valuable insights beyond nodule detection, with CAC being among the most significant incidental findings.”
However, she added, a “robust mechanism” is needed to manage other findings — such as thoracic aortic disease, low bone density, and abnormalities in the thyroid or upper abdominal organs — without overdiagnosis. A mechanism is also needed to notify cardiologists or primary care providers about severe CAC findings.
Challenges that need to be overcome before such mechanisms can be put in place, she said, “include ensuring standardized CAC reporting, avoiding overburdening healthcare providers, mitigating the risk of excessive downstream testing, and ensuring equitable access to follow-up care for underserved and rural communities.”
Providers involved in lung cancer screening “must be trained to recognize the importance of CAC findings and act upon them,” she added. “Awareness campaigns or continuing medical education modules could address this.”
Multidisciplinary lung cancer screening programs can help with patient education, she noted. “Clear communication about potential findings, including the significance of incidental CAC, should be prioritized and addressed proactively, ideally before the exam, to enhance patient understanding and engagement.”
Matthew Tomey, MD, assistant professor of medicine at the Icahn School of Medicine at Mount Sinai in New York City, said that, “as a practicing cardiologist, I find it very helpful to look at my patients’ recent or past CT scans to look for vascular calcification. Whether or not a scan is specifically protocoled as a cardiac study, we can often appreciate vascular calcification when it is present. I would encourage every physician involved in helping their patients to prevent heart disease to take advantage of looking at any prior CT scans for evidence of vascular calcification.
“Systems of care to facilitate recognition of patients with incidentally discovered vascular calcification would be welcome and, on a large scale, could help prevent cardiovascular events,” he noted. “Such a system might involve facilitating referral to a prevention specialist. It could involve evidence-based guidance for referring physicians who ordered scans.”
Like Bader, he noted the importance of patient education, adding that it could be quite powerful. “We should be doing more to empower our patients to understand the findings of their imaging and to give them actionable, evidence-based guidance on how they can promote their own cardiovascular health,” he concluded.
No funding for the study was reported. Small reported receiving a research grant for amyloid research from Pfizer and honoraria from Pfizer and Alnylam (all paid to the institution, outside the submitted work). Bader and Tomey declared no relevant conflicts.
A version of this article first appeared on Medscape.com.
FROM THE CANADIAN MEDICAL ASSOCIATION JOURNAL
Telehealth Vs In-Person Diabetes Care: Is One Better?
Adults with diabetes who participated in telehealth visits reported similar levels of care, trust in the healthcare system, and patient-centered communication compared to those who had in-person visits, a cross-sectional study suggested.
The authors urged continued integration of telehealth into diabetes care beyond December 31, 2024, when pandemic-era telehealth flexibilities were set to expire, potentially limiting such services.
The study “provides population-level evidence that telehealth can deliver care quality comparable to in-person visits in diabetes management,” lead author Young-Rock Hong, PhD, MPH, an assistant professor at the University of Florida, Gainesville, told this news organization.
“Perhaps the most meaningful finding was the high utilization of telephone-only visits among older adults,” he said. “This has important policy implications, particularly as some insurers and healthcare systems have pushed to restrict telehealth coverage to video-only visits.”
“Maintaining telephone visit coverage is crucial for equitable access, especially for older adults who may be less comfortable with video technology; those with limited internet access; or patients facing other barriers to video visits,” he explained.
The study was published online in BMJ Open.
Video-only, Voice-only, Both
The researchers did a secondary analysis of data from the 2022 Health Information National Trends Survey, a nationally representative survey that includes information on health communication and knowledge and perceptions about all health conditions among US adults aged ≥ 18 years.
Participants had a self-reported diagnosis of type 1 or type 2 diabetes. The mean age was 59.4 years; 50% were women; and 53% were non-Hispanic White individuals.
Primary and secondary outcomes were use of telehealth in the past 12 months, telehealth modality, overall perception of quality of care, perceived trust in the healthcare system, and patient-centered communication score.
In the analysis of 1116 participants representing 33.6 million individuals, 48.1% reported telehealth use in the past 12 months.
Compared with nonusers, telehealth users were more likely to be younger women with higher household incomes and health insurance coverage, to live in metropolitan areas, and to have multiple chronic conditions, poorer perceived health status, and more frequent physician visits.
After adjustment, adults aged ≥ 65 years had a significantly lower likelihood of telehealth use than those ages 18-49 years (odds ratio [OR], 0.43).
Higher income and more frequent healthcare visits were predictors of telehealth usage, with no significant differences across race, education, or location.
Those with a household income between $35,000 and $74,999 were more than twice as likely to use telehealth (OR, 2.14) as those with incomes below $35,000.
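To make the statistic concrete, the sketch below fits a logistic regression on fabricated data and exponentiates the coefficients to obtain odds ratios; the variable names and effect sizes are invented, and the survey's actual design-weighted analysis would be more involved.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age_65_plus": rng.integers(0, 2, n),
    "mid_income":  rng.integers(0, 2, n),
})
# Fabricated outcome loosely echoing the reported directions:
# older age lowers, mid-range income raises, the odds of telehealth use.
logit = -0.2 - 0.8 * df["age_65_plus"] + 0.7 * df["mid_income"]
df["telehealth"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

fit = smf.logit("telehealth ~ age_65_plus + mid_income", data=df).fit(disp=False)
print(np.exp(fit.params))  # exponentiated coefficients are odds ratios
```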
Among telehealth users, 39.3% reported video-only visits; 35%, phone (voice)-only; and 25.7%, both modalities. Among those aged ≥ 65 years, 55.5% used phone calls only and 25.5% used video only. In contrast, those aged 18-49 years had higher rates of video-only use (36.1%) and combined video/phone use (31.2%).
Healthcare provider recommendation (68.1%) was the most common reason for telehealth use, followed by convenience (57.7%), avoiding potential COVID-19 exposure (48.1%), and obtaining advice about the need for in-person care (23.6%).
Nonusers said they preferred in-person visits and also cited privacy concerns and technology challenges.
Patient-reported quality-of-care outcomes were comparable between telehealth users and nonusers, with no significant differences by telehealth modality or area of residence (urban or rural).
Around 70% of individuals with diabetes in both groups rated their quality of care as “excellent” or “very good”; fewer than 10% rated their care as “fair” or “poor.”
Similarly, trust in the healthcare system was comparable between users and nonusers: 41.3% of telehealth users and 41% of nonusers reported trusting the healthcare system “very much.” Patient-centered communication scores were also similar between the groups.
Telehealth appears to be a good option from the providers’ perspective as well, according to the authors. A previous study by the team found more than 80% of US physicians intended to continue telehealth beyond the pandemic.
“The recent unanimous bipartisan passage of the Telehealth Modernization Act by the House Energy & Commerce Committee signals strong political support for extending telehealth flexibilities through 2026,” Hong said. “The bill addresses key access issues by permanently removing geographic restrictions, expanding eligible providers, and maintaining audio-only coverage — provisions that align with our study’s findings about the importance of telephone visits, particularly for older adults and underserved populations.”
There is concern that extending telehealth services might increase Medicare spending by over $2 billion, he added. “While this may be a valid concern, there is a need for more robust evidence regarding the overall value of telehealth services — ie, the ‘benefits’ they provide relative to their costs and outcomes.”
Reassuring, but More Research Needed
COVID prompted “dramatic shifts” in care delivery from in-person to telehealth, Kevin Peterson, MD, MPH, American Diabetes Association vice president of primary care, told this news organization. “The authors’ findings provide reassurance that these changes provided for additional convenience in care delivery without being associated with compromises in patient-reported care quality.”
However, he said, “the study does not necessarily capture representative samples of rural and underserved populations, making the impact of telehealth on health equity difficult to determine.” In addition, although patient-perceived care quality did not change with telehealth delivery, the study “does not address impacts on safety, clinical outcomes, equity, costs, or other important measures.”
Furthermore, he noted, “this is an association study that occurred during the dramatic changes brought about by COVID. It may not represent provider or patient preferences that characterize the role of telehealth under more normal circumstances.”
For now, clinicians should be aware that “initial evidence suggests that telehealth can be integrated into care without significantly compromising the patient’s perception of the quality of care,” he concluded.
No funding was declared. Hong and Peterson reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
FROM BMJ OPEN
How to Manage Patients on GLP-1s Before Surgery
Guidance on how to manage patients taking glucagon-like peptide 1 (GLP-1) receptor agonists before surgery continues to evolve, as does the US Food and Drug Administration’s (FDA’s) labeling for the drugs. The changes can be challenging to keep up with, and endocrinologists seem to be making their own decisions based on clinical experience and their interpretations of the potential impact and value of the emerging information.
The latest FDA label change warns about the risk for pulmonary aspiration but notes “insufficient” data to inform recommendations to mitigate the risk in vulnerable patients. Yet, the latest multi-society guidance, led by the American Society of Anesthesiologists (ASA) and based on consensus, not evidence, has nuanced advice for managing patients at risk.
Does the FDA’s label change make a difference regarding the multi-society guidance, which was published earlier? “The answer is no,” Girish Joshi, MD, vice chair, ASA Committee on Practice Parameters, told this news organization. “The concern of increased pulmonary aspiration in patients who are on GLP-1 receptor agonists has been known, and that concern still exists. So, we started with not an assumption but the premise that patients on GLP-1 receptor agonists are at a higher risk of aspiration during sedation, analgesia, and/or general anesthesia. The FDA basically confirms what we say in the guidance.”
Joshi, professor in the Anesthesiology and Pain Management Department at UT Southwestern Medical Center, Dallas, aimed to make the guidance, which was published simultaneously in several society journals, more implementable with a letter to the editor of Anesthesiology. The key, he said, is to identify patients at higher risk for aspiration; all others would follow treatment as usual.
The letter highlights three overarching recommendations and then expands upon them: a standardized preoperative assessment for risk for delayed gastric emptying (yes/no); a selective preoperative care plan based on that assessment and shared decision-making; and, on the day of the procedure, reassessment for delayed gastric emptying with risk mitigation if there is clinical concern.
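Restated as pseudologic, the three steps might look like the hedged Python sketch below; the field names and branching conditions are invented for illustration and are not the ASA guidance itself.

```python
# Illustrative restatement of the three-step flow described above.
# Field names and logic branches are invented; they are not the ASA
# guidance itself.
from dataclasses import dataclass

@dataclass
class PreopAssessment:
    on_glp1: bool
    dose_escalation_phase: bool  # still titrating the dose upward?
    gi_symptoms: bool            # nausea, vomiting, early satiety, etc.

def delayed_gastric_emptying_risk(a: PreopAssessment) -> bool:
    """Step 1: standardized yes/no preoperative risk screen."""
    return a.on_glp1 and (a.dose_escalation_phase or a.gi_symptoms)

def preop_plan(a: PreopAssessment) -> str:
    """Step 2: selective care plan via shared decision-making.
    Step 3 (not shown): reassess on the day of the procedure."""
    if not delayed_gastric_emptying_risk(a):
        return "treat as usual; standard fasting instructions"
    return "higher risk: consider 24-hour liquid diet and day-of reassessment"

print(preop_plan(PreopAssessment(on_glp1=True, dose_escalation_phase=True, gi_symptoms=False)))
```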
But it seems as though, for now, endocrinologists are managing these patients as they see fit, within the parameters of any institutional guidance requirements. Here is what they said about their practice:
Amy E. Rothberg, MD, DABOM, director of the Weight Management Program & Rewind at the University of Michigan, Ann Arbor, Michigan, said, “I think it makes sense to inform our patients of the labeling and rare but potential adverse effects if they intend to undergo anesthesia for a scheduled procedure/surgery. There is never no risk of aspiration during anesthesia.”
“I find it a bit curious that ASA implies that those who experience GI side effects are more likely than those who do not to have this potential risk. I doubt there is evidence that those without GI side effects are necessarily ‘safer,’ and a study to determine that is unlikely to be conducted.”
“My institution does require a 1-week pause on GLP-1s for those undergoing anesthesia for surgery,” she added. “That’s not evidence-based either, but probably reduces the risk of aspiration during anesthesia — but I don’t know what the actual denominator is for aspiration in those who continued vs those who took a pause from GLP-1s. Pausing does certainly (anecdotally) increase the traffic of communications between physicians and their patients about what to do in the interval.”
Anne Peters, MD, a professor of clinical medicine and a clinical scholar at the Keck School of Medicine of the University of Southern California, Los Angeles, said, “The FDA label change is a warning that really doesn’t say exactly who on GLP-1 RAs is at highest risk or what to do, and if any intervention has been shown to help. The ASA recommendations seem much more nuanced and practical, including point-of-care gastric ultrasound to see if there is retained food/fluid prior to surgery.”
“In my practice, I individualize what I say, depending on the person and the circumstance,” she said. “Mostly, I have people hold one dose before planned surgery, so they have been 10 days at least without a dose. But if worried about gastrointestinal symptoms or gastroparesis, I have them do a clear liquid diet for 24 hours presurgery. Or at least avoid heavy fat meals the day before.”
“There is a risk of aspiration with anything that slows gastric emptying — maybe even in patients with gastroparesis at baseline due to physiologic, not pharmacological, reasons — and anesthesiologists should be aware of the need to assess patients individually.”
Michael A. Weintraub, MD, of NYU Langone Health Diabetes & Endocrine Associates in New York City, observed, “The risk of a pulmonary aspiration event with GLP-1 medication is quite rare, but not zero. On the other hand, stopping the GLP-1 can cause hyperglycemia or rebound weight gain. Furthermore, it can become complicated to restart GLP1 dosing, particularly given the existing medication shortages.”
“In most cases, stopping a weekly GLP-1 medication 1 week prior to the procedure minimizes the risks of pulmonary aspiration and prevents any worsening hyperglycemia or weight gain,” he said. Taking the last dose 7 days before the procedure is optimal, he noted. “That way, they would be due for the next dose on the day of the procedure, and taking it the day following procedure minimizes disruption in their once-weekly regimen.”
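The timing he describes is simple date arithmetic; this minimal sketch (with an invented procedure date) shows the last-dose calculation for a once-weekly agent.

```python
from datetime import date, timedelta

def last_weekly_dose_before(procedure: date, interval_days: int = 7) -> date:
    """Schedule the final preoperative dose one full dosing interval
    before the procedure, so the next dose falls on the procedure day."""
    return procedure - timedelta(days=interval_days)

print(last_weekly_dose_before(date(2025, 3, 14)))  # 2025-03-07
```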
Malini Gupta, MD, director of G2Endo Endocrinology & Metabolism, Memphis, Tennessee, advised that physicians weigh the risk of stopping the medication (which can cause a glycemic spike) vs risk for aspiration.
“In my opinion, all patients should follow a strict liquid diet or NPO status prior to a surgery to further decrease the risk of aspiration,” she said. “I generally hold the GLP-1 RA for a week before a surgery. If additional glycemic control is necessary, I will add to or adjust one of the patient’s other diabetes medications.”
Jaime Almandoz, MD, associate professor of medicine and medical director of the Weight Wellness Program in Dallas, said, “As endocrinologists, we typically rely on our anesthesia colleagues for guidance on perioperative management. In light of emerging guidelines for holding GLP-1 medications, we also recommend patients adopt a liquid diet 24 hours prior to surgery, along with the fasting protocol.”
“For those managing diabetes with GLP-1 therapies, it is crucial to establish a blood sugar management plan while off these medications, especially during fasting or postoperative periods, which can be further influenced by many factors, including nausea, pain medications, and antibiotics after the procedure.”
Joshi added that at Parkland Hospital in Dallas, “we do a huge number of cases using the same information. We identify patients who are at risk, and then we tell our proceduralists and our surgeons if they’re in the escalating phase of the dosing or if they have GI symptoms; don’t even schedule them as an elective case; wait till the escalation phase is over and then schedule them.”
“That way,” he said, “it becomes logistically easy to manage because the recommendation from the group is that patients who are at higher risk should receive a 24-hour liquid diet — the same as colonoscopy. But sometimes it can be challenging to do so.”
Joshi has received honoraria for consultation from Merck Sharp & Dohme, Vertex Pharmaceuticals, and Haisco-USA Pharmaceuticals. Gupta is on the speakers bureau for Amgen (Tepezza) and IBSA (Tirosint) and is a creative consultant for AbbVie. Almandoz serves on advisory boards for Novo Nordisk, Eli Lilly, and Boehringer Ingelheim. The other experts declared no relevant relationships.
A version of this article first appeared on Medscape.com.
, as does the US Food and Drug Administration’s (FDA’s) labeling for the drugs. The changes can be challenging to keep up with, and endocrinologists seem to be making their own decisions based on clinical experience and their interpretations of the potential impact and value of the emerging information.
The latest FDA label change warns about the risk for pulmonary aspiration but notes “insufficient” data to inform recommendations to mitigate the risk in vulnerable patients. Yet, the latest multi-society guidance, led by the American Society of Anesthesiologists (ASA) and based on consensus, not evidence, has nuanced advice for managing patients at risk.
Does the FDA’s label change make a difference regarding the multi-society guidance, which was published earlier? “The answer is no,” Girish Joshi, MD, vice chair, ASA Committee on Practice Parameters, told this news organization. “The concern of increased pulmonary aspiration in patients who are on GLP-1 receptor agonists has been known, and that concern still exists. So, we started with not an assumption but the premise that patients on GLP-1 receptor agonists are at a higher risk of aspiration during sedation, analgesia, and/or general anesthesia. The FDA basically confirms what we say in the guidance.”
Joshi, professor in the Anesthesiology and Pain Management Department at UT Southwestern Medical Center, Dallas, aimed to make the guidance, which was published simultaneously in several society journals, more implementable with a letter to the editor of Anesthesiology. The key, he said, is to identify patients at higher risk for aspiration; all others would follow treatment as usual.
The letter highlights three overarching recommendations and then expands upon them: Standardized preoperative assessment for risk for delayed gastric emptying (yes/no); selective preoperative care plan based on delayed gastric emptying assessment and shared decision-making; and on the day of the procedure, reassess for delayed gastric emptying and mitigate risk if there is clinical concern.
But it seems as though, for now, endocrinologists are managing these patients as they see fit, within the parameters of any institutional guidance requirements. Here is what they said about their practice:
Amy E. Rothberg, MD, DABOM, director of the Weight Management Program & Rewind at the University of Michigan, Ann Arbor, Michigan, said, “I think it makes sense to inform our patients of the labeling and rare but potential adverse effects if they intend to undergo anesthesia for a scheduled procedure/surgery. There is never no risk of aspiration during anesthesia.”
“I find it a bit curious that ASA implies that those who experience GI side effects are more likely than those who do not to have this potential risk. I doubt there is evidence that those without GI side effects are necessarily ‘safer’ and a study to determine that is unlikely to take be conducted.”
“My institution does require a 1-week pause on GLP-1s for those undergoing anesthesia for surgery,” she added. “That’s not evidence-based either, but probably reduces the risk of aspiration during anesthesia — but I don’t know what the actual denominator is for aspiration in those who continued vs those who took a pause from GLP-1s. Pausing does certainly (anecdotally) increase the traffic of communications between physicians and their patients about what to do in the interval.”
Anne Peters, MD, a professor of clinical medicine and a clinical scholar at the Keck School of Medicine of the University of Southern California, Los Angeles, said, “The FDA label change is a warning that really doesn’t say exactly who on GLP-1 RAs is at highest risk or what to do, and if any intervention has been shown to help. The ASA recommendations seem much more nuanced and practical, including point-of-care gastric ultrasound to see if there is retained food/fluid prior to surgery.”
“In my practice, I individualize what I say, depending on the person and the circumstance,” she said. “Mostly, I have people hold one dose before planned surgery, so they have been 10 days at least without a dose. But if worried about gastrointestinal symptoms or gastroparesis, I have them do a clear liquid diet for 24 hours presurgery. Or at least avoid heavy fat meals the day before.”
“There is a risk of aspiration with anything that slows gastric emptying — maybe even in patients with gastroparesis at baseline due to physiologic, not pharmacological, reasons — and anesthesiologists should be aware of the need to assess patients individually.”
Michael A. Weintraub, MD, of NYU Langone Health Diabetes & Endocrine Associates in New York City, observed, “The risk of a pulmonary aspiration event with GLP-1 medication is quite rare, but not zero. On the other hand, stopping the GLP-1 can cause hyperglycemia or rebound weight gain. Furthermore, it can become complicated to restart GLP1 dosing, particularly given the existing medication shortages.”
“In most cases, stopping a weekly GLP-1 medication 1 week prior to the procedure minimizes the risks of pulmonary aspiration and prevents any worsening hyperglycemia or weight gain,” he said. However, taking the drug 7 days prior to the procedure is optimal. “That way, they would be due for the next dose on the day of the procedure, and taking it the day following procedure minimizes disruption in their once-weekly regimen.”
Malini Gupta, MD, director of G2Endo Endocrinology & Metabolism, Memphis, Tennessee, advised that physicians weigh the risk of stopping the medication (which can cause a glycemic spike) vs risk for aspiration.
“In my opinion, all patients should follow a strict liquid diet or NPO status prior to a surgery to further decrease the risk of aspiration,” she said. “I generally hold the GLP-1 RA for a week before a surgery. If additional glycemic control is necessary, I will add to or adjust one of the patient’s other diabetes medications.”
Jaime Almandoz, MD, associate professor of medicine and medical director of the Weight Wellness Program in Dallas, said, “As endocrinologists, we typically rely on our anesthesia colleagues for guidance on perioperative management. In light of emerging guidelines for holding GLP-1 medications, we also recommend patients adopt a liquid diet 24 hours prior to surgery, along with the fasting protocol.”
“For those managing diabetes with GLP-1 therapies, it is crucial to establish a blood sugar management plan while off these medications, especially during fasting or postoperative periods, which can be further influenced by many factors, including nausea, pain medications, and antibiotics after the procedure.”
Joshi added that at Parkland Hospital in Dallas, “we do a huge number of cases using the same information. We identify patients who are at risk, and then we tell our proceduralists and our surgeons if they’re in the escalating phase of the dosing or if they have GI symptoms; don’t even schedule them as an elective case; wait till the escalation phase is over and then schedule them.”
“That way,” he said, “it becomes logistically easy to manage because the recommendation from the group is that patients who are at higher risk should receive a 24-hour liquid diet — the same as colonoscopy. But sometimes it can be challenging to do so.”
Joshi has received honoraria for consultation from Merck Sharp & Dohme, Vertex Pharmaceuticals, and Haisco-USA Pharmaceuticals. Gupta is on the speakers bureau for Amgen (Tepezza) and IBSA (Tirosint) and is a creative consultant for AbbVie. Almandoz serves on advisory boards for Novo Nordisk, Eli Lilly, and Boehringer Ingelheim. The other experts declared no relevant relationships.
A version of this article first appeared on Medscape.com.
, as does the US Food and Drug Administration’s (FDA’s) labeling for the drugs. The changes can be challenging to keep up with, and endocrinologists seem to be making their own decisions based on clinical experience and their interpretations of the potential impact and value of the emerging information.
The latest FDA label change warns about the risk for pulmonary aspiration but notes “insufficient” data to inform recommendations to mitigate the risk in vulnerable patients. Yet, the latest multi-society guidance, led by the American Society of Anesthesiologists (ASA) and based on consensus, not evidence, has nuanced advice for managing patients at risk.
Does the FDA’s label change make a difference regarding the multi-society guidance, which was published earlier? “The answer is no,” Girish Joshi, MD, vice chair, ASA Committee on Practice Parameters, told this news organization. “The concern of increased pulmonary aspiration in patients who are on GLP-1 receptor agonists has been known, and that concern still exists. So, we started with not an assumption but the premise that patients on GLP-1 receptor agonists are at a higher risk of aspiration during sedation, analgesia, and/or general anesthesia. The FDA basically confirms what we say in the guidance.”
Joshi, professor in the Anesthesiology and Pain Management Department at UT Southwestern Medical Center, Dallas, aimed to make the guidance, which was published simultaneously in several society journals, more implementable with a letter to the editor of Anesthesiology. The key, he said, is to identify patients at higher risk for aspiration; all others would follow treatment as usual.
The letter highlights three overarching recommendations and then expands upon them: Standardized preoperative assessment for risk for delayed gastric emptying (yes/no); selective preoperative care plan based on delayed gastric emptying assessment and shared decision-making; and on the day of the procedure, reassess for delayed gastric emptying and mitigate risk if there is clinical concern.
But it seems as though, for now, endocrinologists are managing these patients as they see fit, within the parameters of any institutional guidance requirements. Here is what they said about their practice:
Amy E. Rothberg, MD, DABOM, director of the Weight Management Program & Rewind at the University of Michigan, Ann Arbor, Michigan, said, “I think it makes sense to inform our patients of the labeling and rare but potential adverse effects if they intend to undergo anesthesia for a scheduled procedure/surgery. There is never no risk of aspiration during anesthesia.”
“I find it a bit curious that ASA implies that those who experience GI side effects are more likely than those who do not to have this potential risk. I doubt there is evidence that those without GI side effects are necessarily ‘safer’ and a study to determine that is unlikely to take be conducted.”
“My institution does require a 1-week pause on GLP-1s for those undergoing anesthesia for surgery,” she added. “That’s not evidence-based either, but probably reduces the risk of aspiration during anesthesia — but I don’t know what the actual denominator is for aspiration in those who continued vs those who took a pause from GLP-1s. Pausing does certainly (anecdotally) increase the traffic of communications between physicians and their patients about what to do in the interval.”
Anne Peters, MD, a professor of clinical medicine and a clinical scholar at the Keck School of Medicine of the University of Southern California, Los Angeles, said, “The FDA label change is a warning that really doesn’t say exactly who on GLP-1 RAs is at highest risk or what to do, and if any intervention has been shown to help. The ASA recommendations seem much more nuanced and practical, including point-of-care gastric ultrasound to see if there is retained food/fluid prior to surgery.”
“In my practice, I individualize what I say, depending on the person and the circumstance,” she said. “Mostly, I have people hold one dose before planned surgery, so they have been 10 days at least without a dose. But if worried about gastrointestinal symptoms or gastroparesis, I have them do a clear liquid diet for 24 hours presurgery. Or at least avoid heavy fat meals the day before.”
“There is a risk of aspiration with anything that slows gastric emptying — maybe even in patients with gastroparesis at baseline due to physiologic, not pharmacological, reasons — and anesthesiologists should be aware of the need to assess patients individually.”
Michael A. Weintraub, MD, of NYU Langone Health Diabetes & Endocrine Associates in New York City, observed, “The risk of a pulmonary aspiration event with GLP-1 medication is quite rare, but not zero. On the other hand, stopping the GLP-1 can cause hyperglycemia or rebound weight gain. Furthermore, it can become complicated to restart GLP1 dosing, particularly given the existing medication shortages.”
“In most cases, stopping a weekly GLP-1 medication 1 week prior to the procedure minimizes the risks of pulmonary aspiration and prevents any worsening hyperglycemia or weight gain,” he said. However, taking the drug 7 days prior to the procedure is optimal. “That way, they would be due for the next dose on the day of the procedure, and taking it the day following procedure minimizes disruption in their once-weekly regimen.”
Malini Gupta, MD, director of G2Endo Endocrinology & Metabolism, Memphis, Tennessee, advised that physicians weigh the risk of stopping the medication (which can cause a glycemic spike) vs risk for aspiration.
“In my opinion, all patients should follow a strict liquid diet or NPO status prior to a surgery to further decrease the risk of aspiration,” she said. “I generally hold the GLP-1 RA for a week before a surgery. If additional glycemic control is necessary, I will add to or adjust one of the patient’s other diabetes medications.”
Jaime Almandoz, MD, associate professor of medicine and medical director of the Weight Wellness Program in Dallas, said, “As endocrinologists, we typically rely on our anesthesia colleagues for guidance on perioperative management. In light of emerging guidelines for holding GLP-1 medications, we also recommend patients adopt a liquid diet 24 hours prior to surgery, along with the fasting protocol.”
“For those managing diabetes with GLP-1 therapies, it is crucial to establish a blood sugar management plan while off these medications, especially during fasting or postoperative periods, which can be further influenced by many factors, including nausea, pain medications, and antibiotics after the procedure.”
Joshi added that at Parkland Hospital in Dallas, “we do a huge number of cases using the same information. We identify patients who are at risk, and then we tell our proceduralists and our surgeons: if they’re in the escalating phase of the dosing or if they have GI symptoms, don’t even schedule them as an elective case; wait till the escalation phase is over and then schedule them.”
“That way,” he said, “it becomes logistically easy to manage because the recommendation from the group is that patients who are at higher risk should receive a 24-hour liquid diet — the same as colonoscopy. But sometimes it can be challenging to do so.”
Joshi has received honoraria for consultation from Merck Sharp & Dohme, Vertex Pharmaceuticals, and Haisco-USA Pharmaceuticals. Gupta is on the speakers bureau for Amgen (Tepezza) and IBSA (Tirosint) and is a creative consultant for AbbVie. Almandoz serves on advisory boards for Novo Nordisk, Eli Lilly, and Boehringer Ingelheim. The other experts declared no relevant relationships.
A version of this article first appeared on Medscape.com.
Continuous Glucose Monitors for All? Opinions Remain Mixed
The recent US Food and Drug Administration (FDA) clearance of two over-the-counter (OTC) continuous glucose monitors (CGMs) — Dexcom’s Stelo and Abbott’s Lingo — has sparked interest in potentially expanding their use to those without diabetes or prediabetes.
There are several valid questions about how the general population might benefit from CGMs. Can they motivate those struggling with overweight to shed pounds? Would they prompt users to follow more healthful eating patterns? Can they act as a canary in the coal mine, alerting users to prediabetes?
The short answer to these questions is, we don’t know.
“Glucose levels fluctuate in everyone in response to meals, exercise, stress, etc, but there has been no credible research to support CGM use by most people who do not have diabetes,” Jill Crandall, MD, chief of endocrinology at Albert Einstein College of Medicine and Montefiore Health System in New York City, said in an interview.
“The utility of CGM for people without diabetes hasn’t been established and the drive to market CGM as an OTC device seems largely driven by financial considerations,” Crandall said. She advocates instead for a strategy directed at more meaningful objectives.
“For now, efforts should be focused on making CGMs available to patients who will clearly benefit — ie, people with diabetes, especially those who are using insulin and those who are struggling to achieve desired levels of glucose control.”
Nicole Spartano, PhD, assistant professor of medicine in endocrinology, diabetes, nutrition and weight management at Boston University’s Chobanian & Avedisian School of Medicine in Massachusetts, agreed with this assessment.
“It is definitely too early to make recommendations for patients without diabetes based on their CGM data,” said Spartano, who also serves as the director of the Glucose Monitoring Station at the Framingham Heart Study in Framingham, Massachusetts. “We simply do not have enough follow-up data to tell us which CGM metrics are associated with higher risk for disease.”
Spartano served as the lead author of a recent study showing time spent in various CGM ranges in a large cohort of individuals without diabetes using the Dexcom G6 Pro model. In the future, she said the data may be used to establish reference ranges for clinicians and individuals.
“We are working on another paper surveying diabetologists and CGM experts about how they interpret CGM reports from individuals without diabetes,” she said in an interview. Although the data are not yet published, Spartano said, “we are finding that clinicians are currently very discordant in how they interpret these reports.”
Potential Benefits Right Now
Satish Garg, MD, director of the Adult Clinic at the Barbara Davis Center for Diabetes at the University of Colorado Anschutz Medical Campus, Aurora, and editor-in-chief of Diabetes Technology & Therapeutics, is convinced that glucose should be considered another vital sign, like blood pressure, pulse rate, respiration rate, and body temperature. Therefore, he sees the use of a CGM in people without diabetes as a way to build awareness and perhaps prompt behavior modification.
“Someone with an A1c of 4.9 on a normal day may notice that they’ve gained a little bit of weight, and if they use an OTC CGM and start seeing changes, it might help them to modulate their diet themselves, whether they see a dietitian or not,” Garg said.
He gave the example of “a natural behavioral change” occurring when someone using a CGM declines to eat a post-meal dessert after seeing their blood glucose had already risen to 170.
Wearing a CGM also has the potential to alert the user to high blood glucose, leading them to an earlier diagnosis of prediabetes or diabetes, Shichun Bao, MD, PhD, Diabetes Technology Program Leader at the Vanderbilt Eskind Diabetes Clinic of Vanderbilt University in Nashville, Tennessee, said in an interview. She has had cases where a family member of someone with diabetes used the patient’s fingerstick meter, found that their glucose was 280, and self-diagnosed with diabetes.
“It’s the same thing with the CGM,” she said. “If they somehow did not know they have diabetes and they wear a CGM and it shows their sugar is high, that will help them to know to see their provider to get a diagnosis, get treated, and track progression.”
Given the shortage of endocrinologists and long waits for appointments in the United States and elsewhere, it is very likely that primary care physicians will be the ones fielding questions from individuals without diabetes interested in purchasing an OTC CGM. Internist Douglas Paauw, MD, a professor at the University of Washington School of Medicine, Seattle, said in an interview that, for his practice, “the benefits outweigh some of the limitations.”
“I don’t really think somebody who doesn’t have diabetes needs to be using a CGM all the time or long term,” he said. “But I have used it in a few people without diabetes, and I think if someone can afford to use it for 2-4 weeks, especially if they’ve been gaining weight, then they can really recognize what happens to their bodies when they eat certain foods.”
Paauw added that CGMs are a more effective means of teaching his patients than them receiving a lecture from him on healthy eating. “There’s nothing like immediate feedback on what happens to your body to change behavior.”
Similarly, William Golden, medical director at Arkansas Medicaid and professor of medicine and public health at the University of Arkansas for Medical Sciences, Little Rock, said in an interview that “it is difficult to justify coverage for CGMs on demand — but if people want to invest in their own devices and the technology motivates them to eat better and/or lose weight, then there are benefits to be had.”
Potential Downsides
Although it may seem simple to use an OTC CGM to measure blood glucose on the fly, in the real world it can take patients time to understand these devices, “especially the first day or so, when users are going to get false lows,” Bao said. “Clinicians need to tell them if you don’t feel like your sugar is low and the device says it’s low, whether they do or don’t have diabetes, they should do a fingerstick glucose test to confirm the low before rushing to take in sugar. On the other hand, if they drink a lot of juice, their sugar will go high. So, it can create problems and false results either way.”
Many factors affect glucose, she said. “When you’re sick, glucose can go high, and when you’re very sick, in the ICU, sometimes it can be low. It depends on the situation.” Bao noted that certain vitamins and drugs can also interfere with readings.
Bao doesn’t see value in having people without diabetes monitor their glucose continuously. “If they want to see what foods or exercise do to their body, they will probably benefit from a short trial to gain some insight; otherwise, they’re wasting money,” she said.
Another potential downside is that there are no head-to-head comparison data for the approved devices, Garg said. “But it’s clear to us that Stelo’s range is very narrow, 70 to 200, whereas the Lingo ranges are pretty much full, from 40 to 400 or 55 to 400. So, we don’t know the accuracy of these sensors.”
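To see what those reportable ranges imply in practice, consider this toy Python sketch. It assumes (our assumption, not a device specification) that readings outside the reportable range are simply censored to LOW/HIGH flags:

```python
def display_reading(mg_dl: float, lo: float, hi: float) -> str:
    """Toy model of a reportable range: values outside it are censored.
    Actual device behavior may differ; this is for illustration only."""
    if mg_dl < lo:
        return f"LOW (<{lo:g} mg/dL)"
    if mg_dl > hi:
        return f"HIGH (>{hi:g} mg/dL)"
    return f"{mg_dl:g} mg/dL"

# Ranges as quoted by Garg: Stelo 70-200 mg/dL vs Lingo 40-400 mg/dL
for value in (45, 180, 250):
    print(f"{value}:", display_reading(value, 70, 200), "|", display_reading(value, 40, 400))
```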
Golden observed that for certain patients, CGMs may lead to psychological distress rather than providing a sense of control over their blood glucose levels.
“I have had a nondiabetic patient or two that obsessed about their blood sugars and a device would only magnify their anxiety/neurosis,” he said. “The bottom line is that it’s a tool for a balanced approach to health management, but the daily results must be kept in perspective!”
Educate Patients, Primary Care Physicians
To maximize potential benefits for patients without diabetes, clinicians need to be well trained in the use and interpretation of results from the devices, Bao said. They can then better educate their patients, including discussing with them possible pitfalls surrounding their use.
“For example, a patient may see that their blood glucose, as measured by a fingerstick, is 95, whereas the CGM says 140, and ask, ‘Which one do I trust?’ ”
This is where the patient can be educated about the difference between interstitial glucose, as measured by the CGM, and blood glucose, as measured by the fingerstick. Because it takes about 15 minutes for glucose to move from the blood into the interstitial fluid, there’s a lag time, and the two measurements will differ.
“A discrepancy of 20% is totally acceptable for that reason,” Bao said.
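Bao's 20% rule of thumb is easy to operationalize. In this illustrative Python check we take the fingerstick value as the reference (an assumption on our part; the article does not specify the percentage base):

```python
def within_20_percent(cgm_mg_dl: float, fingerstick_mg_dl: float) -> bool:
    """True if the CGM-vs-fingerstick gap is within 20% of the fingerstick
    (reference) value, per the rule of thumb described above."""
    return abs(cgm_mg_dl - fingerstick_mg_dl) <= 0.20 * fingerstick_mg_dl

print(within_20_percent(110, 95))  # True: a 15 mg/dL gap is ~16% of 95
print(within_20_percent(140, 95))  # False: a 45 mg/dL gap is ~47% of 95
```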
She has also seen several examples where patients were misled by their CGM when its sensor became dislodged.
“Sometimes when a sensor has moved, the patient may push it back in because they don’t want to throw it away. But it doesn’t work that way, and they end up with inaccurate readings.”
At a minimum, Bao added, clinicians and patients should read the package insert but also be aware that it doesn’t list everything that might go wrong or interfere with the device’s accuracy.
Manufacturers of OTC devices should be training primary care and family practice doctors in their use, given the expected “huge” influx of patients wanting to use them, according to Garg.
“If you are expecting endos or diabetes specialists to see these people, that’s never going to happen,” he said. “We have a big shortage of these specialists, so industry has to train these doctors. Patients will bring their doctor’s data, and the clinicians need to learn the basics of how to interpret the glucose values they see. Then they can treat these patients rather than shipping all of them to endos who likely are not available.”
Paauw agreed that CGM training should be directed largely toward primary care professionals, who can help keep their under-resourced endocrinologist colleagues from seeing an uptick in “the worried well.”
“The bottom line is that primary care professionals do need to understand the CGM,” he said. “They do need to get comfortable with it. They do need to come up with opinions on how to use it. The public’s going to be using it, and we need to be competent in it and use our subspecialists appropriately.”
Spartano received funding for an investigator-initiated research grant from Novo Nordisk unrelated to the cited CGM studies. Garg, Bao, Paauw, Golden, and Crandall declared no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
Is Acute Kidney Injury Really a Single Disease?
The search for a better biomarker than creatinine for acute kidney injury (AKI) has been “long and elusive.” However, could researchers be on the right path now?
“The thinking is moving away from trying to find one biomarker that can be used for different types of kidney injury to a recognition that AKI is not just a single disease that a patient has or doesn’t have,” Rob D. Nerenz, PhD, an associate professor in the Department of Pathology and Laboratory Medicine at the Medical College of Wisconsin, Milwaukee, told this news organization. “It’s lots of different diseases that all affect the kidney in different ways.”
AKI is actually a “loose collection” of hepatorenal, cardiorenal, nephrotoxic, and sepsis-associated syndromes, as well as acute interstitial nephritis (AIN), he said. “So the question is not: ‘Is AKI present — yes or no?’ It’s: ‘What kind of AKI is present, and how do I treat it?’ ”
‘Mediocre Markers’
AKI affects about 10%-30% of hospitalized patients, according to Nerenz. It’s associated with an increased risk for adverse outcomes, including post-AKI chronic kidney disease and a mortality rate of approximately 24%.
Currently, AKI is defined by a rapid increase in serum creatinine, a decrease in urine output, or both.
“Those are mediocre markers,” Nerenz said, as serum creatinine is not very sensitive to acute change, and the increase is often detected after the therapeutic window of intervention has passed. In addition, “it only tells us that the kidneys are unhappy; it doesn’t say anything about the cause.”
Urine output is limited as a marker because many conditions affect it. “If you’re dehydrated, urine output is going to decrease,” he said. “And in some forms of AKI, urine output actually goes up.”
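The article doesn't spell out the thresholds behind that definition; the consensus KDIGO criteria are the ones commonly used in practice. Here is a minimal Python sketch assuming those thresholds (they come from the consensus definition, not from this article):

```python
def kdigo_aki_flag(creat_now: float, creat_48h_ago: float, creat_baseline: float,
                   urine_ml_kg_h: float, oliguria_hours: float) -> bool:
    """Simplified KDIGO-style AKI screen: creatinine rise >= 0.3 mg/dL within
    48 h, or >= 1.5x a baseline known or presumed within 7 days, or urine
    output < 0.5 mL/kg/h sustained for >= 6 h."""
    absolute_rise = (creat_now - creat_48h_ago) >= 0.3
    relative_rise = creat_now >= 1.5 * creat_baseline
    oliguria = urine_ml_kg_h < 0.5 and oliguria_hours >= 6
    return absolute_rise or relative_rise or oliguria

# A 1.0 -> 1.4 mg/dL rise over 48 hours meets the absolute-rise criterion
print(kdigo_aki_flag(creat_now=1.4, creat_48h_ago=1.0, creat_baseline=1.0,
                     urine_ml_kg_h=0.8, oliguria_hours=0))  # True
```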
What’s needed, he said, is a more sensitive biomarker that’s detectable within a shorter timeframe of 2-6 hours following injury.
“Right now, we’re looking at 48 hours before a change becomes apparent, and that’s just too long. Plus, it should be kidney specific. One of the major limitations of the biomarkers that have been evaluated to this point is that, yes, they’re released by the kidney, but they’re also released by other tissue types within the body, and that hinders their effectiveness as a marker.”
Neutrophil Gelatinase-Associated Lipocalin (NGAL)
Although research on better biomarkers is ongoing, “there’s also a recognition that some of the protein markers that have been around for a while, if used appropriately, can provide value,” Nerenz said. These include, among others, NGAL.
NGAL works well in pediatric patients without other comorbidities, but it has been less useful in adult patients because it is also released by other cell types. However, recent research suggests it shows promise in patients with both cirrhosis and AKI.
There are three main causes of AKI in cirrhosis, Nerenz explained. The first is prerenal and can be primarily addressed through rehydration.
“When these patients come in, clinicians won’t do anything right away other than provide fluids. If creatinine improves over the 48-hour period of fluid replenishment, then the patient is sent home because there really isn’t extensive damage to the kidneys.”
If improvement isn’t seen after those 48 hours, then it could be one of two things: hepatorenal syndrome or acute tubular necrosis. Patients with hepatorenal syndrome are candidates for terlipressin, which the Food and Drug Administration (FDA) approved for this indication in 2022 after it displayed notable efficacy in a double-blind study.
“You don’t want to give terlipressin to just anybody because if the issue is not a diminished blood supply to the kidney, it’s not going to help, and comes with some serious side effects, such as respiratory failure,” Nerenz explained. “Having a biomarker that can distinguish between hepatorenal syndrome and acute tubular necrosis really helps clinicians confidently identify which patients are good candidates for this drug. Right now, we’re flying blind to a certain extent, basically using clinical intuition.”
Currently, NGAL testing is FDA cleared only for pediatric use. One way hospitals have dealt with that is by developing the test in their own labs, with appropriate reagents, validation, and so forth. These laboratory-developed tests can then be used in adults but haven’t gone through the FDA approval process.
However, the FDA’s recent announcement stating that the agency should oversee lab-developed tests has made this situation unclear, Nerenz said.
“At this point, we don’t know if there’s still an opportunity to take the NGAL test (or any other cleared biomarker) and validate it for use in a different patient population. Many hospital labs simply don’t have the resources to take these tests through the whole FDA approval process.”
A New Biomarker for AIN?
Meanwhile, research is also moving forward on a better biomarker for AIN, which is also under the AKI umbrella.
“It’s important to diagnose AIN because it has a very specific treatment,” Dennis G. Moledina, MD, PhD, Yale School of Medicine in New Haven, Connecticut, told this news organization.
“AIN is caused by a bunch of different medications, such as proton pump inhibitors, cancer drugs, nonsteroidal anti-inflammatory drugs, and antibiotics, so when someone has this condition, you have to stop potentially life-saving medications and give unnecessary and potentially toxic immunosuppressive drugs, like prednisone,” he said. “If you get the diagnosis wrong, you’re stopping vital drugs and giving immunosuppression for no reason. And if you miss the diagnosis, AIN can lead to permanent chronic kidney disease.”
“Right now, the only way to diagnose AIN is to do a kidney biopsy, which is risky because it can often lead to significant bleeding,” he said. “Some people can’t undergo a biopsy because they’re on medications that increase the risk of bleeding, and they can’t be stopped.”
Furthermore, he noted, “the longer a patient takes a drug that’s causing AIN without getting a diagnosis, the less the chances of recovery because the longer you let this kidney inflammation go on, the more fibrosis and permanent damage develops. So it is important to diagnose it as early as possible, and that’s again why we have a real need for a noninvasive biomarker that can be tested rapidly.”
Moledina and colleagues have been working on identifying a suitable biomarker for close to 10 years, the latest example of which is their 2023 study validating urinary CXCL9 as just such a marker.
“We’re most excited about CXCL9 because it’s already used to diagnose some other diseases in plasma,” Moledina said. “We think that we can convince labs to test it in urine.”
In an accompanying editorial, Mark Canney, PhD, and colleagues at the University of Ottawa and The Ottawa Hospital in Ontario, Canada, wrote that the CXCL9 study findings “are exciting because they provide a road map of where diagnostics can get to for this common, yet poorly identified and treated, cause of kidney damage. The need for a different approach can be readily identified from the fact that clinicians’ gestalt for diagnosing AIN was almost tantamount to tossing a coin (AUC, 0.57). CXCL9 alone outperformed not only the clinician’s prebiopsy suspicion but also an existing diagnostic model and other candidate biomarkers both in the discovery and external validation cohorts.”
Like NGAL, CXCL9 will have to go through the FDA approval process before it can be used for AIN. Therefore, it may be a few years before it can become routinely available, Moledina said.
Nevertheless, Nerenz added, “I think the next steps for AKI are probably continuing on this path of context-dependent, selective biomarker use. I anticipate that we’ll see ongoing development in this space, just expanding to a wider variety of clinical scenarios.”
Nerenz declared receiving research funding from Abbott Labs for evaluation of an AKI biomarker. Moledina is a co-inventor on a pending patent, “Methods and Systems for Diagnosis of Acute Interstitial Nephritis”; a cofounder of the diagnostics company Predict AIN; and a consultant for Biohaven.
A version of this article first appeared on Medscape.com.
Cannabis Use Linked to Brain Thinning in Adolescents
Cannabis use during adolescence may be linked to thinning of the cerebral cortex, research in mice and humans suggested.
The multilevel study demonstrated that tetrahydrocannabinol (THC), an active substance in cannabis, causes shrinkage of dendritic arborization — the neurons’ network of antennae that play a critical role in communication between brain cells.
The connection between dendritic arborization and cortical thickness was hinted at in an earlier study by Tomáš Paus, MD, PhD, professor of psychiatry and addictology at the University of Montreal, Quebec, Canada, and colleagues, who found that cannabis use in early adolescence was associated with lower cortical thickness in boys with a high genetic risk for schizophrenia.
“We speculated at that time that the differences in cortical thickness might be related to differences in dendritic arborization, and our current study confirmed it,” Paus said.
That confirmation came in the mouse part of the study, when coauthor Graciela Piñeyro, MD, PhD, also of the University of Montreal, counted the dendritic branches of mice exposed to THC and compared the total with the number of dendritic branches in unexposed mice. “What surprised me was finding that THC in the mice was targeting the same type of cells and structures that Dr. Paus had predicted would be affected from the human studies,” she said. “Structurally, they were mostly the neurons that contribute to synapses in the cortex, and their branching was reduced.”
Paus explained that in humans, a decrease in input from the affected dendrites “makes it harder for the brain to learn new things, interact with people, cope with new situations, et cetera. In other words, it makes the brain more vulnerable to everything that can happen in a young person’s life.”
The study was published online on October 9 in the Journal of Neuroscience.
Of Mice, Men, and Cannabis
Although associations between cannabis use by teenagers and variations in brain maturation have been well studied, the cellular and molecular underpinnings of these associations were unclear, according to the authors.
To investigate further, they conducted this three-step study. First, they exposed adolescent male mice to THC or a synthetic cannabinoid (WIN 55,212-2) and assessed differentially expressed genes, spine numbers, and the extent of dendritic complexity in the frontal cortex of each mouse.
Next, using MRI, they examined differences in cortical thickness in 34 brain regions in 140 male adolescents who experimented with cannabis before age 16 years and 327 who did not.
Then, they again conducted experiments in mice and found that 13 THC-related genes correlated with variations in cortical thickness. Virtual histology revealed that these 13 genes were coexpressed with cell markers of astrocytes, microglia, and a type of pyramidal cell enriched in genes that regulate dendritic expression.
Similarly, the WIN-related genes correlated with differences in cortical thickness and showed coexpression patterns with the same three cell types.
Furthermore, the affected genes were also found in humans, particularly in the thinner cortical regions of the adolescents who experimented with cannabis.
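To make that correlational step concrete, the sketch below mimics the general shape of such an analysis: correlating per-region expression of a set of candidate genes with per-region cortical-thickness differences. The 34 regions and 13 genes echo the counts in the article, but the data, the simple Pearson correlations, and every variable name are illustrative assumptions, not the authors' actual virtual-histology pipeline.

```python
# Illustrative sketch with synthetic data: correlate per-region expression
# of candidate genes with per-region cortical-thickness differences
# (cannabis-exposed minus unexposed). Not the study's actual pipeline.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n_regions, n_genes = 34, 13  # counts borrowed from the article

# Synthetic mean expression of each gene in each brain region
expression = rng.normal(0.0, 1.0, size=(n_regions, n_genes))

# Synthetic thickness differences, loosely tied to the first gene
thickness_diff = -0.5 * expression[:, 0] + rng.normal(0.0, 0.5, size=n_regions)

for g in range(n_genes):
    r, p = pearsonr(expression[:, g], thickness_diff)
    print(f"gene {g:2d}: r = {r:+.2f}, p = {p:.3f}")
```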
By acting on microglia, THC seems to promote the removal of synapses and, eventually, the reduction of the dendritic tree in mice, Piñeyro explained. That’s important not only because a similar mechanism may be at work in humans but also because “we now might have a model to test different types of cannabis products to see which ones are producing the greatest effect on neurons and therefore greater removal of synapses through the microglia. This could be a way of testing drugs that are out in the street to see which would be the most or least dangerous to the synapses in the brain.”
‘Significant Implications’
Commenting on the study, Yasmin Hurd, PhD, Ward-Coleman chair of translational neuroscience at the Icahn School of Medicine at Mount Sinai and director of the Addiction Institute of Mount Sinai in New York City, said, “These findings are in line with previous results, so they are feasible. This study adds more depth by showing that cortical genes that were differentially altered by adolescent THC correlated with cannabis-related changes in cortical thickness based on human neuroimaging data.” Hurd did not participate in the research.
“The results emphasize that consumption of potent cannabis products during adolescence can impact cortical function, which has significant implications for decision-making and risky behavior as well. It also can increase vulnerability to psychiatric disorders such as schizophrenia.”
Although a mouse model is “not truly the same as the human condition, the fact that the animal model also showed evidence of the morphological changes indicative of reduced cortical thickness, [like] the humans, is strong,” she said.
Additional research could include women and assess potential sex differences, she added.
Ronald Ellis, MD, PhD, an investigator in the Center for Medicinal Cannabis Research at the University of California, San Diego School of Medicine, said, “The findings are plausible and extend prior work showing evidence of increased risk for psychotic disorders later in life in adolescents who use cannabis.” Ellis did not participate in the research.
“Future studies should explore how these findings might vary across different demographic groups, which could provide a more inclusive understanding of how cannabis impacts the brain,” he said. “Additionally, longitudinal studies to track changes in the brain over time could help to establish causal relationships more robustly.
“The take-home message to clinicians at this point is to discuss cannabis use history carefully and confidentially with adolescent patients to better provide advice on its potential risks,” he concluded.
Paus added that he would tell patients, “If you’re going to use cannabis, don’t start early. If you have to, then do so in moderation. And if you have family history of mental illness, be very careful.”
No funding for the study was reported. Paus, Piñeyro, Hurd, and Ellis declared having no relevant financial relationships.
A version of this article appeared on Medscape.com.
FROM THE JOURNAL OF NEUROSCIENCE