Thoracic cancer approvals differ at FDA, EMA

A comparison of Food and Drug Administration and European Medicines Agency approvals of immune checkpoint inhibitors in the field of thoracic cancer found significantly longer approval times at the European agency, as well as some examples of different perspectives on biomarkers.

The findings of this new study suggest that patients in Europe may face delayed access to new therapies, the authors wrote in a poster presentation at the European Lung Cancer Congress 2023.

They also noted that some FDA approvals occurred before pivotal trial data became available, which can leave doubt about efficacy.

“Effective cancer management relies on availability of therapies which improve patient outcomes, such as immunotherapy. The two largest regulators involved in approving immunotherapies are the FDA and the EMA and therefore we aimed to compare the approval timings between both to see if a difference in approval timings was present,” coauthor Aakash Desai, MD, said in an interview.

Previously, the researchers studied cancer approval patterns at the FDA and EMA between 2010 and 2019 and found that U.S. patients gain access to new cancer therapeutics more quickly than European patients do. Of 89 new therapies approved in that time span, FDA approval came first in 85 cases (95%), even though just 72% of applications were submitted to the FDA first. EMA approval took a median of 241 days longer than FDA approval. Thirty-nine percent of U.S. approvals came before publication of the pivotal clinical trial, versus 9% of EMA approvals.

The new study focuses on thoracic oncology, where lung cancer is the leading cause of cancer death. “As such, prompt approval timings for immunotherapies are crucial for effective treatment. Furthermore, lung cancer immunotherapies target certain biomarkers, of which, PD1 and PD-L1 are key,” said Dr. Desai, a fellow at Mayo Clinic, Rochester, Minn.

Still, Dr. Desai sounded a note of caution. “Just because a therapy is approved more quickly does not necessarily mean it is efficacious, as the clinical trials involving these drugs may not have been completed or fully reported at the time of authorization. [Drug developers] need to have a more global and coordinated approach to evaluating evidence and approval of drugs so the care received by a particular patient is not a factor of where they live,” he said.

The researchers surveyed seven immune checkpoint inhibitors (ICIs) approved by both the FDA and the EMA for thoracic malignancies, including non–small cell lung cancer (NSCLC), small cell lung cancer (SCLC), and mesothelioma. The FDA approved 22 indications for the novel ICIs in thoracic malignancies, compared with 16 indications at the EMA. The gap in median approval times (FDA versus EMA) was larger for SCLC (179 versus 308 days) and mesothelioma (39 versus 280 days) than for NSCLC (242 versus 272 days).

“There are two discrepancies in biomarker requirements between the FDA and EMA, whereby the FDA has a broader requirement, despite these being ranked fairly consistently in terms of evidence of benefit by [European Society for Medical Oncology Magnitude of Clinical Benefit Scale and National Comprehensive Cancer Network] frameworks,” said Dr. Desai. In the case of atezolizumab for adjuvant NSCLC, the FDA required PD-L1 levels of 1% or higher, while the EMA required 50% or higher. For durvalumab in unresectable NSCLC, the FDA had no PD-L1 requirement, while the EMA required 1% or higher.

Dr. Desai suggested a need for further investigation into the differences between the two agencies. Asked why the two agencies might have different views on the biomarkers, Dr. Desai responded: “That is the million-dollar question. My guess is [the] EMA weighs subgroup data more than [the] FDA.”

Dr. Desai has no relevant financial disclosures.

FROM ELCC 2023

New insight into the growing problem of gaming disorder

Three studies provide new insight into the growing problem of gaming disorder (GD), including the condition’s genesis, effective treatments, and the need for a greater focus on recovery.

A team of international researchers led by Orsolya Király, PhD, of the Institute of Psychology, Eötvös Loránd University, Budapest, reviewed the characteristics and etiology of GD. They concluded that its genesis arises from the interaction of environmental factors, game-specific factors, and individual factors, including personality traits, comorbid psychopathology, and genetic predisposition.

“The development of GD is a complex process and we identified three major factors involved,” study coauthor Mark Griffiths, PhD, distinguished professor of behavioral addiction and director of the international gaming research unit, psychology department, Nottingham (England) Trent University, said in an interview. Because of this complexity, “prevention and intervention in GD require multiprofessional action.”

The review was published in Comprehensive Psychiatry.

In a second paper, published online in Frontiers in Psychiatry, Chinese investigators reviewing randomized controlled trials (RCTs) presented “compelling evidence” to support four effective interventions for GD: group counseling, acceptance and cognitive restructuring intervention program (ACRIP), short-term cognitive-behavioral therapy (CBT), and craving behavioral intervention (CBI).

A third paper, published online in the Journal of Behavioral Addictions, in which researchers analyzed close to 50 studies of GD, found that the concept of “recovery” is rarely mentioned in GD research. Lead author Belle Gavriel-Fried, PhD, senior professor, Bob Shapell School of Social Work, Tel Aviv University, said in an interview that recovery is a “holistic concept that taps into many aspects of life.”

Understanding the “differences in the impact and availability” of negative and positive human resources and their effect on recovery “can help clinicians to customize treatment,” she said.
 

Complex interplay

GD is garnering increasing attention in the clinical community, especially since 2019, when the World Health Organization included it in the ICD-11.

“Although for most individuals, gaming is a recreational activity or even a passion, a small group of gamers experiences negative symptoms which impact their mental and physical health and cause functional impairment,” wrote Dr. Király and colleagues.

Dr. Griffiths explained that his team wanted to provide an “up-to-date primer – a ‘one-stop shop’ – on all things etiologic concerning gaming disorder for academics and practitioners” as well as others, such as health policy makers, teachers, and individuals in the gaming industry.

The researchers identified three factors that increase the risk of developing GD, the first being gaming-related factors, which make video games “addictive in a way that vulnerable individuals may develop GD.”

For example, GD is more prevalent among online versus offline game players, possibly because online multiplayer games “provide safe environments in which players can fulfill their social needs while remaining invisible and anonymous.”

Game genre also matters, with massively multiplayer online role-playing games, first-person/third-person shooter games, real-time strategy games, and multiplayer online battle arena games most implicated in problematic gaming. Moreover, the “monetization techniques” of certain games also increase their addictive potential.

The researchers point to individual factors that increase the risk of developing GD, including male sex and younger age, personality traits like impulsivity and sensation-seeking, and comorbidities including ADHD, anxiety, and depression.

Poor self-esteem and lack of social competencies make gaming “an easy and efficient way to compensate for these deficiencies, which in turn, heightens the risk for developing GD,” they add. Neurobiological processes and genetic predisposition also play a role.

Lastly, the authors mentioned environmental factors, including family and peer-group issues, problems at work or school, and cultural factors.

“The take-home messages are that problematic gaming has had a long history of empirical research; that the psychiatric community now views GD as a legitimate mental health issue; and that the reasons for GD are complex, with many different factors involved in the acquisition, development, and maintenance of GD,” said Dr. Griffiths.

Beneficial behavioral therapies

Yuzhou Chen and colleagues, Southwest University, Chongqing, China, conducted a systematic review of RCTs investigating interventions for treating GD. Despite the “large number of intervention approaches developed over the past decade, as yet, there are no authoritative guidelines for what makes an effective GD intervention,” they wrote.

Few studies have focused specifically on GD; most have instead examined a combination of internet addiction and GD, and the interventions used to treat internet addiction may not apply to GD. Moreover, few studies have used an RCT design. The researchers therefore set out to review studies that specifically used an RCT design to investigate interventions for GD.

They searched six databases to identify RCTs that tested GD interventions from the inception of each database until the end of 2021. To be included, participants had to be diagnosed with GD and receive either a “complete and systematic intervention” or be in a comparator control group receiving no intervention or placebo.

Seven studies met the inclusion criteria (n = 332 participants). The studies tested five interventions:
 

  • Group counseling with three different themes (interpersonal interaction, acceptance and commitment, cognition and behavior)
  • CBI, which addresses cravings
  • Transcranial direct current stimulation (tDCS)
  • ACRIP with the main objectives of reducing GD symptoms and improving psychological well-being
  • Short-term CBT, which addresses maladaptive cognitions

The mean duration of the interventions ranged from 3 to 15 weeks.

The primary outcome was GD severity, with secondary outcomes including depression, anxiety, cognition, game time, self-esteem, self-compassion, shyness, impulsivity, and psychological well-being.

Group counseling, CBI, ACRIP, and short-term CBT interventions had “a significant effect on decreasing the severity of GD,” while tDCS had “no significant effect.”

Behavioral therapy “exerts its effect on the behavioral mechanism of GD; for example, by reducing the association between game-related stimuli and the game player’s response to them,” the authors suggested.

Recovery vs. pathology

Recovery “traditionally represents the transition from trauma and illness to health,” Dr. Gavriel-Fried and colleagues noted.

Two paradigms of recovery are “deficit based” and “strength based.” The first assesses recovery in terms of abstinence, sobriety, and symptom reduction, while the second focuses on “growth, rather than a reduction in pathology.”

But although recovery is “embedded within mental health addiction policies and practice,” the concept has received “scant attention” in GD research.

The researchers therefore aimed to “map and summarize the state of the art on recovery from GD,” defining “recovery” as the “ability to handle conflicting feelings and emotions without external mediation.”

They conducted a scoping review of all literature regarding GD or internet GD published before February 2022 (47 studies, 2,924 participants with GD; mean age range, 13-26 years).

Most studies (n = 32) consisted of exclusively male subjects. Only 10 included both sexes, and female participants were in the minority.

Most studies (n = 42) did not address the concept of recovery, although all studies did report significant improvements in gaming-related pathology. The typical terminology used to describe changes in participants’ GD was “reduction” and/or “decrease” in symptom severity.

Although 18 studies mentioned the word “recovery,” only 5 actually discussed issues related to the notion of recovery, and only 5 used the term “abstinence.”

In addition, only 13 studies examined positive components of life in patients with GD, such as increased psychological well-being, life satisfaction, quality of life, improved emotional state, relational skills, and executive control, as well as improved self-care, hygiene, sleep, and interest in school studies.

“As a person and researcher who believes that words shape the way we perceive things, I think we should use the word ‘recovery’ rather than ‘pathology’ much more in research, therapy, and policy,” said Dr. Gavriel-Fried.

She noted that, because GD is a “relatively new behavioral addictive disorder, theories are still being developed and definitions of the symptoms are still being fine-tuned.”

“The field as a whole will benefit from future theoretical work that will lead to practical solutions for treating GD and ways to identify the risk factors,” Dr. Gavriel-Fried said.

Filling a research gap

In a comment, David Greenfield, MD, founder and medical director of the Connecticut-based Center for Internet and Technology Addiction, noted that 3 decades ago, there was almost no research into this area.

“The fact that we have these reviews and studies is good because all of the research adds to the science providing more data about an area we still don’t know that much about, where research is still in its infancy,” said Dr. Greenfield, who was not involved with the present study.

“Although we have definitions, there’s no complete agreement about the definitions of GD, and we do not yet have a unified approach,” continued Dr. Greenfield, who wrote the books Overcoming Internet Addiction for Dummies and Virtual Addiction.

He suggested that “recovery” is rarely used as a concept in GD research perhaps because there’s a “bifurcation in the field of addiction medicine in which behavioral addictions are not seen as equivalent to substance addictions,” and, particularly with GD, the principles of “recovery” have not yet matured.

“Recovery means meaningful life away from the screen, not just abstinence from the screen,” said Dr. Greenfield.

The study by Mr. Chen and colleagues was supported by grants from the National Social Science Foundation of China, the Chongqing Research Program of Basic Research and Frontier Technology, and the Fundamental Research Funds for the Central Universities. Dr. Griffiths has reported receiving research funding from Norsk Tipping (the gambling operator owned by the Norwegian government). The study by Dr. Király and colleagues received support from the Hungarian National Research, Development and Innovation Office and the János Bolyai Research Scholarship of the Hungarian Academy of Sciences to individual investigators. The study by Dr. Gavriel-Fried and colleagues received support from the Hungarian National Research, Development and Innovation Office and the János Bolyai Research Scholarship of the Hungarian Academy of Sciences to individual investigators. Dr. Gavriel-Fried has reported receiving grants from the Israel National Insurance Institute and the Committee for Independent Studies of the Israel Lottery. Dr. Greenfield reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

In metastatic NSCLC, better QoL tied to better outcomes

A large systematic review of phase III randomized controlled trials in metastatic non–small cell lung cancer found that improvements in quality of life (QoL) were associated with improved progression-free survival (PFS), but not overall survival (OS), among patients treated with targeted therapy.

The authors, including Fabio Salomone of the University of Naples Federico II, department of clinical medicine and surgery, also observed trends toward an association between QoL improvement and PFS among patients treated with chemotherapy and immunotherapy.

The new research was presented during a poster session at the European Lung Cancer Congress 2023.

“The findings of the study support the thesis that QoL and survival in patients with NSCLC are linked. Although this is documented in the literature, this study sums up the evidence of a large number of RCTs, and provides detail in the QoL/survival relationship by treatment type. The subgroup analysis by treatment type is a key strength of the study showing that the QoL/survival link is stronger and more reliable in target(ed) therapies,” George Kypriotakis, PhD, who was not involved with the study, said in an interview.

Combining efficacy and quality of life improvement is an important consideration in clinical practice. “It is important that clinicians provide therapies that are also palliative and improve QoL,” said Dr. Kypriotakis, assistant professor of behavioral sciences at University of Texas MD Anderson Cancer Center, Houston. He noted that the finding of a PFS benefit is a good indicator of overall benefit, which is important since OS outcomes require a larger number of patients and longer follow-up to determine.

“PFS can still be a valid surrogate for OS, especially when it is positively associated with QoL,” noted Dr. Kypriotakis.

The study included 81 trials. Sixteen of the studies investigated immunotherapy, 50 investigated targeted therapy, and 17 investigated chemotherapy regimens. Thirty-seven percent of the trials found an improvement in QoL in the treatment arm compared with the control arm, 59.3% found no difference between arms, and 3.7% found a worse QoL in the treatment arm. There was no statistically significant association between an improvement in OS and QoL among the trials (P = .368).
 

Improved QoL tied to improved PFS

The researchers found an association between improved QoL and improved PFS. Among 60 trials that showed improved PFS, 43.3% found a superior QoL in the treatment arm, 53.3% showed no difference, and 3.3% showed reduced QoL. Among 20 trials that found no improvement in PFS, 20% demonstrated an improved QoL, 75% found no change, and 5% showed worse QoL (P = .0473).

A subanalysis of 48 targeted therapy trials found a correlation between PFS and QoL improvement (P = .0196). Among 25 trials of epidermal growth factor receptor (EGFR) and anaplastic lymphoma kinase (ALK) inhibitors that showed an improved PFS, 60% showed improved QoL, 36% showed no difference, and 4% showed worsening (P = .0077). An additional seven trials of these agents showed no PFS benefit and no change in QoL.
 

Industry sponsorship may affect QoL results

The researchers found potential evidence that industry sponsorship may put a spin on QoL outcomes. Among 51 trials that showed no QoL benefit associated with treatment, the description of the QoL outcome in the 37 industry-sponsored trials was judged to be neutral and consistent with the study findings in 26 cases but unjustifiably favorable in 11 cases. Among the 14 trials with nonprofit support, descriptions of QoL results were neutral in all cases (P = .0232).

“Obviously, industry may be motivated to overemphasize treatment benefits, especially in measures that also have a qualitative/subjective dimension such as QoL. Assuming that the authors used a reliable criterion to evaluate ‘inappropriateness,’ industry may be more likely to emphasize QoL improvements as a surrogate for OS, especially when seeking drug approval,” Dr. Kypriotakis said.

The study is retrospective and cannot prove causation.

Dr. Salomone and Dr. Kypriotakis have no relevant financial disclosures.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

A large, systematic review of phase III randomized, controlled trials in metastatic non–small cell lung cancer found quality of life (QoL) improvements in progression-free survival (PFS), but not overall survival (OS), among patients treated with targeted therapy.

The authors, including Fabio Salomone of the University of Naples Federico II, department of clinical medicine and surgery, also observed trends toward an association between QoL improvement and PFS among patients treated with chemotherapy and immunotherapy.

The new research was presented during a poster session at European Lung Cancer Congress 2023.

“The findings of the study support the thesis that QoL and survival in patients with NSCLC are linked. Although this is documented in the literature, this study sums up the evidence of a large number of RCTs, and provides detail in the QoL/survival relationship by treatment type. The subgroup analysis by treatment type is a key strength of the study showing that the QoL/survival link is stronger and more reliable in target(ed) therapies,” George Kypriotakis, PhD, who was not involved with the study, said in an interview.

Combining efficacy and quality of life improvement is an important consideration in clinical practice. “It is important that clinicians provide therapies that are also palliative and improve QoL,” said Dr. Kypriotakis, assistant professor of behavioral sciences at University of Texas MD Anderson Cancer Center, Houston. He noted that the finding of a PFS benefit is a good indicator of overall benefit, which is important since OS outcomes require a larger number of patients and longer follow-up to determine.

“PFS can still be a valid surrogate for OS, especially when it is positively associated with QoL,” noted Dr. Kypriotakis.

The study included 81 trials. Sixteen of the studies investigated immunotherapy, 50 investigated targeted therapy, and 17 investigated chemotherapy regimens. Thirty-seven percent of the trials found an improvement in QoL in the treatment arm compared with the control arm, 59.3% found no difference between arms, and 3.7% found a worse QoL in the treatment arm. There was no statistically significant association between an improvement in OS and QoL among the trials (P = .368).
 

Improved QoL tied to improved PFS

The researchers found an association between improved QoL and improved PFS. Among 60 trials that showed improved PFS, 43.3% found a superior QoL in the treatment arm, 53.3% showed no difference, and 3.3% showed reduced QoL. Among 20 trials that found no improvement in PFS, 20% demonstrated an improved QoL, 75% found no change, and 5% showed worse QoL (P = .0473).

A subanalysis of 48 targeted therapy trials found a correlation between PFS and QoL improvement (P = .0196). Among 25 trials involving patients receiving epidermal growth factor receptor (EGFR) and anaplastic lymphoma kinase (ALK) inhibitors showing an improved PFS, 60% showed improved QoL, 36% showed no difference, and 4% showed worsening (P = .0077). Seven of these trials showed no PFS benefit and no change in QoL.
 

Industry sponsorship may affect QOL results

The researchers found potential evidence that industry sponsorship may lead to a spin on QoL outcomes. Among 51 trials that showed no QoL benefit associated with treatment, the description of the QoL outcome in 37 industry-sponsored was judged to be neutral and coherent with the study findings in 26 cases, but unjustifiably favorable in 11 cases. Among 14 with nonprofit support, descriptions of QoL results were found to be neutral in all cases (P = .0232).

“Obviously, industry may be motivated to overemphasize treatment benefits, especially in measures that also have a qualitative/subjective dimension such as QoL. Assuming that the authors used a reliable criterion to evaluate “inappropriateness,” industry may be more likely to emphasize QoL improvements as a surrogate for OS, especially when seeking drug approval,” Dr. Kypriotakis said.

The study is retrospective and cannot prove causation.

Dr. Salomone and Dr. Kypriotakis have no relevant financial disclosures.



FROM ELCC 2023

Refined incidence rate of HCC with alcohol-associated cirrhosis encourages surveillance

Article Type
Changed
Fri, 04/28/2023 - 12:46

Hepatocellular carcinoma (HCC) is relatively common among patients with alcohol-associated cirrhosis, reaching a cumulative incidence of 9% at the 10-year mark, shows a large pooled analysis.

Incidence rates were higher for cohorts that underwent HCC surveillance versus those that did not undergo surveillance, suggesting that such programs offer significant benefit, lead author Daniel Q. Huang, MBBS, of the University of California, San Diego, and colleagues reported.


“A systematic review of the incidence of HCC among patients with alcohol-associated cirrhosis has not been reported,” the investigators wrote in Clinical Gastroenterology and Hepatology, prompting the present research.

Previous studies have described a broad range of annual incidence findings for HCC in this population, from 0.6% to 5.6%, suggesting that a systematic approach was needed.

To this end, Dr. Huang and colleagues analyzed data from 18 studies that involved 148,333 patients with alcohol-associated cirrhosis. The primary analysis aimed to determine cumulative incidence rates over time, while the secondary analysis characterized the impact of participation in HCC surveillance programs.

“This meta-analysis used reconstructed individual participant data, which is considered to be the gold standard for reporting survival data because it accounts for censoring of events,” the investigators noted. “The current study provides important data that are useful for clinical practice and clinical trial design.”
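As a minimal sketch of how cumulative incidence at fixed time points can be read from censored time-to-event data (not the authors' code), the example below fits a Kaplan-Meier curve to a small, made-up data set using the lifelines package and reports cumulative incidence as 1 minus the estimated survival.

```python
# Toy data, for illustration only: time (years) to HCC diagnosis or censoring
# for 10 hypothetical patients; event = 1 means HCC diagnosed, 0 means censored.
from lifelines import KaplanMeierFitter

durations = [0.8, 2.5, 3.1, 4.0, 5.5, 6.2, 7.4, 8.0, 9.1, 10.0]
events    = [1,   0,   1,   0,   0,   1,   0,   0,   1,   0]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events)

for t in (1, 5, 10):
    surv = kmf.survival_function_at_times(t).iloc[0]
    print(f"{t:>2} yr: cumulative incidence ≈ {1 - surv:.0%}")
```

In a full analysis, the competing risk of death without HCC would typically be handled with a cumulative incidence function (for example, Aalen-Johansen) rather than 1 minus the Kaplan-Meier estimate.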

The cumulative incidence rates of HCC were 1%, 3%, and 9% at 1 year, 5 years, and 10 years, respectively. Of the 12 risk factors studied, smoking, diabetes, and hepatic decompensation were all significantly associated with the rate of HCC.

“Therefore, patients with alcohol-associated cirrhosis should be screened for diabetes to identify the patients at high risk for HCC development,” the investigators wrote. “In addition, patients with alcohol-associated cirrhosis should be advised to stop smoking, while patients with hepatic decompensation should be monitored carefully for the development of HCC if clinically appropriate.”

The secondary analysis showed that HCC incidence rates were higher among patients participating in HCC surveillance programs than those who did not participate (18.6 vs. 4.8 per 1,000 person-years; P = .001).
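For readers less used to the unit, a rate per 1,000 person-years is simply the number of events divided by the total follow-up time contributed by the cohort, scaled by 1,000. The numbers below are hypothetical and illustrate only the arithmetic behind a comparison such as 18.6 vs. 4.8.

```python
# Hypothetical counts, for illustration only (not the pooled study's data).
def rate_per_1000_py(events: int, person_years: float) -> float:
    """Incidence rate per 1,000 person-years of follow-up."""
    return 1000 * events / person_years

surveilled = rate_per_1000_py(events=93, person_years=5000)    # 18.6
unsurveilled = rate_per_1000_py(events=24, person_years=5000)  # 4.8
print(surveilled, unsurveilled, surveilled / unsurveilled)     # rate ratio ≈ 3.9
```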

“Patients with alcohol-associated cirrhosis are known to have lower HCC surveillance rates, which may be related to poor disease awareness, clinic time constraints caused by other active medical issues, and provider beliefs regarding the likelihood of adherence,” the investigators noted.

Increased efforts are needed to promote surveillance in this population, they added, suggesting a range of communication pathways, including social media, traditional news outlets, and direct mailing.

Dr. Huang and colleagues also suggested that the findings should be validated in large prospective studies.

The study was funded by the National Institute on Alcohol Abuse and Alcoholism, the National Institute of Environmental Health Sciences, the National Center for Advancing Translational Sciences, and others. Dr. Huang disclosed funding from the Singapore Ministry of Health’s National Medical Research Council.


The association between cirrhosis and hepatocellular carcinoma (HCC) risk is well known, and routine surveillance is therefore recommended by the American Association for the Study of Liver Diseases. More recent data have shown alcohol use to be an independent risk factor for HCC, along with various other cancers.

In this systematic review and meta-analysis by Huang and colleagues, the incidence of HCC in those with alcohol-associated cirrhosis at 1, 5, and 10 years was 1%, 3%, and 9%, respectively. Interestingly, this study found lower rates of hepatocellular carcinoma in patients with cirrhosis related to alcohol than in those with cirrhosis related to NAFLD or hepatitis C. These findings may, however, reflect underdetection of HCC, as those enrolled in a surveillance program had higher rates of HCC (18.6 vs. 4.8 per 1,000 person-years; P = .001).

Quite frequently, the focus of management in patients with alcohol-associated liver disease is alcohol cessation to prevent further decompensation, and screening is often overlooked. Previous studies have shown, however, that earlier detection is associated with improved survival. Another interesting finding of this study was that patients with concomitant smoking, diabetes, and hepatic decompensation were more likely to develop HCC. When managing patients with alcohol-related liver disease, these confounding risk factors should be mitigated (that is, encouragement of smoking cessation, enhanced screening for diabetes, and more rigorous screening in decompensated patients).

This study brings to light the need for improved screening and concomitant risk factor mitigation for hepatocellular carcinoma given higher rates of detection in those undergoing surveillance. Larger, prospective studies are needed, however, to validate the findings in this study given the recent overall increase in rates of alcohol-associated liver disease.

Priya Maddur, MD, is a visiting clinical associate professor of medicine, University of Arizona, Tucson. Dr. Maddur has no relevant disclosures.



FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY

Antibiotic pretreatment reduces liver ischemia/reperfusion injury

Promoting tissue repair
Article Type
Changed
Thu, 04/13/2023 - 16:14

Antibiotic pretreatment may protect against liver ischemia/reperfusion (I/R) injury through altered gut microbiota, glutamine levels, and glutamine downstream products in circulation, according to a recent study in Cellular and Molecular Gastroenterology and Hepatology.

The findings show that gut microbiota and their metabolites play critical roles in hepatic I/R injury by modulating macrophage metabolic reprogramming, wrote Tianfei Lu, with the Abdominal Transplant Surgery Center at Ruijin Hospital and Shanghai Jiao Tong University, China, and colleagues.

“Potential therapies that target macrophage metabolism, including antibiotic therapies and novel immunometabolism modulators, can be exploited for the treatment of liver I/R injury,” the authors wrote.

Liver I/R injury is a common complication of liver resection, transplantation, trauma, and hemorrhagic shock. Previous studies have noted the important role of gut microbiota in liver disease progression, yet the mechanisms in liver I/R injury remain unknown.

The researchers pretreated mice with an antibiotic cocktail to modify the gut microbiome. They found that the pretreatment showed protective effects against hepatic I/R injury, with reductions in serum alanine aminotransferase (ALT), interleukin-1 beta, tumor necrosis factor–alpha, IL-6, IL-12b, and CXCL10.

Through histologic analysis of liver tissues, they also found that the area of necrosis, the degree of congestion and edema, and the presence of vacuole-like lesions were alleviated in the preconditioned mice. Inflammation and necrosis of the liver were also lower, according to both qualitative and quantitative data.

Then, through fecal microbiota transplantation into germ-free mice, they found that the protection from I/R injury was transferable. This finding indicated that the altered gut microbiome, rather than the antibiotic treatment itself, exerted the protective effect.

Because altered gut microbiota can cause changes in metabolites, the researchers used ultra-performance liquid chromatography coupled to tandem mass spectrometry to characterize changes in gut microbiota and metabolites in both feces and portal blood, and to analyze the mechanisms underlying their protective effects in liver I/R injury.

The researchers found that glutamine and its downstream product alpha-ketoglutarate (AKG) were present at higher concentrations in the feces and blood of the mice with antibiotic pretreatment. Glutamate levels were significantly lower, indicating that glutamine is converted into AKG through glutamate after entering the blood.

In addition, there were increased levels of intermediate products of the tricarboxylic acid (TCA) cycle, as well as pyruvate produced by glycolysis. That led to an increase in M2 macrophages, which are responsible for anti-inflammatory processes and tissue repair.

The authors concluded that elevated glutamine levels in the intestine cause an increase in AKG levels in the blood, and AKG can promote M2 macrophage polarization by fueling the TCA cycle. In turn, the increased number of M2 macrophages can repair hepatic I/R injury.

Finally, the researchers tested oligomycin A, which blocks oxidative phosphorylation (OXPHOS) by inhibiting mitochondrial ATP synthase. As expected, they wrote, the protective effect of antibiotic pretreatment was reversed, M2 macrophages decreased, and serum ALT levels increased.

“The immunometabolism and polarization of macrophages play an important role in host homeostasis and the development of various diseases,” the authors wrote. “The relationship between antibiotics treatment, altered gut microbiota, and liver I/R injury are complex and worthy of further study.”

The study was supported by the China National Science and Technology Major Project, National Natural Science Foundation of China, and Natural Science Foundation exploration project of Zhejiang Province. The authors disclosed no conflicts.


In modern clinical practice, multiple conditions can cause ischemia and reperfusion injury to the liver, including surgical liver resection, liver transplantation, and physical trauma to the organ. Liver damage due to hypoxia is followed by reperfusion injury, resulting in a proinflammatory environment. Liver resident macrophages called Kupffer cells are major mediators of this response, initiating a signaling cascade that leads to recruitment of neutrophils, natural killer cells, and circulating macrophages, which attack sinusoidal endothelial cells and hepatocytes.


In the current issue of CMGH, Lu and colleagues address the extent to which the gut microbiome and its metabolite products, which reach the liver via the portal circulation, influence the severity of ischemia and reperfusion injury (Cell Mol Gastroenterol Hepatol. 2023 Jan 24. doi: 10.1016/j.jcmgh.2023.01.004). This topic is of clinical relevance, as the microbial load of the gut lumen can easily be reduced by several orders of magnitude using non-absorbed antibiotics. Thus, it is important to establish whether patients scheduled for liver resection or transplantation might benefit from preprocedure antibiotic treatment.

Remarkably, Lu and colleagues find that antibiotic preconditioning significantly reduces ischemia and reperfusion injury in an animal model. Mechanistically, they linked the protective effects to a shift of macrophage polarization toward the protective M2 phenotype, which is known to promote tissue repair. These findings suggest that antibiotic preconditioning of patients undergoing procedures with significant ischemia and reperfusion injury should be evaluated in future clinical trials.

Klaus H. Kaestner, PhD, MS, is the Thomas and Evelyn Suor Butterworth Professor in Genetics and associate director of the Penn Diabetes Research Center at the University of Pennsylvania, Philadelphia. He has no relevant financial relationships.



FROM CELLULAR AND MOLECULAR GASTROENTEROLOGY AND HEPATOLOGY

New guidelines for cannabis in chronic pain management released

Article Type
Changed
Tue, 04/11/2023 - 08:04

New clinical practice guidelines for cannabis in chronic pain management have been released.

Developed by a group of Canadian researchers, clinicians, and patients, the guidelines note that cannabinoid-based medicines (CBM) may help clinicians offer an effective, less addictive alternative to opioids in patients with chronic noncancer pain and comorbid conditions.

“We don’t recommend using CBM first line for anything pretty much because there are other alternatives that may be more effective and also offer fewer side effects,” lead guideline author Alan Bell, MD, assistant professor of family and community medicine at the University of Toronto, told this news organization.



“But I would strongly argue that I would use cannabis-based medicine over opioids every time. Why would you use a high potency-high toxicity agent when there’s a low potency-low toxicity alternative?” he said.

The guidelines were published online in the journal Cannabis and Cannabinoid Research.
 

Examining the evidence

A consistent criticism of CBM has been the lack of quality research supporting its therapeutic utility. To develop the current recommendations, the task force reviewed 47 pain management studies enrolling more than 11,000 patients. Almost half of the studies (n = 22) were randomized controlled trials (RCTs) and 12 of the 19 included systematic reviews focused solely on RCTs.

Overall, 38 of the 47 included studies demonstrated that CBM provided at least moderate benefits for chronic pain, resulting in a “strong” recommendation – mostly as an adjunct or replacement treatment in individuals living with chronic pain.


Overall, the guidelines place a high value on improving chronic pain and functionality, and addressing co-occurring conditions such as insomnia, anxiety and depression, mobility, and inflammation. They also provide practical dosing and formulation tips to support the use of CBM in the clinical setting.

When it comes to chronic pain, CBM is not a panacea. However, prior research suggests cannabinoids and opioids share several pharmacologic properties, including independent but possibly related mechanisms for antinociception, making them an intriguing combination.

In the current guidelines, all of the four studies specifically addressing combined opioids and vaporized cannabis flower demonstrated further pain reduction, reinforcing the conclusion that the benefits of CBM for improving pain control in patients taking opioids outweigh the risk of nonserious adverse events (AEs), such as dry mouth, dizziness, increased appetite, sedation, and concentration difficulties.



The recommendations also highlighted evidence demonstrating that a majority of participants were able to reduce their use of routine pain medications with concomitant CBM/opioid administration, while gaining secondary benefits such as improvements in sleep, anxiety, and mood, as well as prevention of opioid tolerance and dose escalation.

Importantly, the guidelines offer an evidence-based algorithm with a clear framework for tapering patients off opioids, especially those on more than 50 mg daily morphine equivalent dose (MED), a level associated with a twofold greater risk for fatal overdose.
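The 50-mg threshold refers to a patient's total daily opioid intake expressed in morphine equivalents. The sketch below is not taken from the guideline; it simply illustrates the arithmetic, using a few commonly cited oral morphine-equivalence factors (such as those in the CDC opioid prescribing guidance), which should be checked against local standards before any clinical use.

```python
# Commonly cited oral morphine-equivalence factors; illustrative only and
# not taken from the Canadian guideline -- verify against local guidance.
CONVERSION = {
    "morphine": 1.0,
    "oxycodone": 1.5,
    "hydrocodone": 1.0,
    "codeine": 0.15,
    "tramadol": 0.1,
}

def daily_med(regimen: dict[str, float]) -> float:
    """Total daily morphine equivalent dose (mg) for a {drug: mg/day} regimen."""
    return sum(CONVERSION[drug] * mg_per_day for drug, mg_per_day in regimen.items())

# Hypothetical patient: oxycodone 20 mg twice daily plus tramadol 50 mg daily.
med = daily_med({"oxycodone": 40, "tramadol": 50})
print(f"Daily MED ≈ {med:.0f} mg")   # 65 mg, above the 50-mg threshold discussed above
```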

An effective alternative

Commenting on the new guidelines, Mark Wallace, MD, who has extensive experience researching and treating pain patients with medical cannabis, said the genesis of his interest in medical cannabis mirrors the guidelines’ focus.

“What got me interested in medical cannabis was trying to get patients off of opioids,” said Dr. Wallace, professor of anesthesiology and chief of the division of pain medicine in the department of anesthesiology at the University of California, San Diego. Dr. Wallace, who was not involved in the guidelines’ development study, said that he’s “titrated hundreds of patients off of opioids using cannabis.”

Dr. Wallace said he found the guidelines’ dosing recommendations helpful.

“If you stay within the 1- to 5-mg dosing range, the risks are so incredibly low, you’re not going to harm the patient.”

While there are patients who abuse cannabis and CBMs, Dr. Wallace noted that he has seen only one patient in the past 20 years who was overusing the medical cannabis. He added that his patient population does not use medical cannabis to get high and, in fact, wants to avoid doses that produce that effect at all costs.

Also commenting on the guidelines, Christopher Gilligan, MD, MBA, associate chief medical officer and a pain medicine physician at Brigham and Women’s Hospital in Boston, who was not involved in the guidelines’ development, points to the risks.



“When we have an opportunity to use cannabinoids in place of opioids for our patients, I think that that’s a positive thing ... and a wise choice in terms of risk benefit,” Dr. Gilligan said.

On the other hand, he cautioned that “freely prescribing” cannabinoids for chronic pain in patients who aren’t on opioids is not good practice.

“We have to take seriously the potential adverse effects of [cannabis], including marijuana use disorder, interference with learning, memory impairment, and psychotic breakthroughs,” said Dr. Gilligan.  

Given the current climate, it would appear that CBM is a long way from being endorsed by the Food and Drug Administration, but for clinicians interested in trying CBM for chronic pain patients, the guidelines may offer a roadmap for initiation and an alternative to prescribing opioids.

Dr. Bell, Dr. Gilligan, and Dr. Wallace report no relevant financial relationships.
 

A version of this article first appeared on Medscape.com.


FROM CANNABIS AND CANNABINOID RESEARCH

Study highlights potential skin cancer risk of UV nail polish dryers

Article Type
Changed
Fri, 04/07/2023 - 13:42

Results of a study recently published in Nature Communications suggest that radiation from ultraviolet nail polish dryers could induce cell death and trigger molecular changes linked to cancer in human cells. According to two experts, these findings raise concerns regarding the safety of frequent use of these nail dryers.

In the study, human and mouse cells were exposed to radiation from UV nail dryers. Exposing human and mouse skin cells to UVA light for 20 minutes resulted in the death of 20%-30% of cells; three consecutive 20-minute sessions resulted in the death of 65%-70% of cells. Additionally, surviving cells suffered oxidative damage to their DNA and mitochondria, with mutational patterns similar to those seen in skin cancer, study investigator Maria Zhivagui, PhD, of the University of California, San Diego, and associates reported.
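
As a quick back-of-the-envelope check (not part of the published analysis), the single-session and cumulative figures are roughly what independent exposures would predict: if each 20-minute session killed about 30% of cells on its own, three sessions would leave (1 − 0.30)³ ≈ 34% of cells alive, or roughly 66% dead, in line with the reported 65%-70%. The few lines of Python below, with the 30% per-session figure taken as an assumption from the upper end of the reported single-session range, make the arithmetic explicit.

# Back-of-the-envelope check, not from the paper: assume each 20-minute session
# independently kills about 30% of cells (upper end of the reported single-session range).
per_session_death = 0.30
sessions = 3
fraction_dead = 1 - (1 - per_session_death) ** sessions
print(f"predicted cumulative cell death after {sessions} sessions: {fraction_dead:.0%}")  # ~66%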


“This study showed that irradiation of human and mouse cell lines using UV nail polish dryers resulted in DNA damage and genome mutations,” Shari Lipner, MD, PhD, director of the nail division at New York–Presbyterian Hospital/Weill Cornell Medicine, New York, said in an interview. The study “ties together exposure to UV light from nail polish dryers and genetic mutations that are associated with skin cancers,” added Dr. Lipner, who was not involved with the study.

UV nail lamps are commonly used to dry and harden gel nail polish formulas. Often referred to as “mini tanning beds,” these devices emit UVA radiation, classified as a Group 1 Carcinogen by the International Agency for Research on Cancer.

“Both UVA and UVB are main drivers of both melanoma and keratinocyte carcinomas (basal cell carcinoma and squamous cell carcinoma),” said Anthony Rossi, MD, a dermatologic surgeon at Memorial Sloan Kettering Cancer Center, New York, who was also not a study investigator. UV irradiance “produces DNA mutations that are specific to forming types of skin cancer,” he said in an interview.



UVA wavelengths commonly used in nail dryers can penetrate all layers of the epidermis, the top layer of the skin, potentially affecting stem cells in the skin, according to the study.

Dr. Lipner noted that “there have been several case reports of patients with histories of gel manicures using UV nail polish dryers who later developed squamous cell carcinomas on the dorsal hands, fingers, and nails, and articles describing high UV emissions from nail polish dryers, but the direct connection between UV dryers and skin cancer development was tenuous.” The first of its kind, the new study investigated the impact of UV nail drying devices at a cellular level.

The results of this study, in combination with previous case reports suggesting the development of skin cancers following UVA dryer use, raise concern regarding the safety of these commonly used devices. The study, the authors wrote, “does not provide direct evidence for an increased cancer risk in human beings,” but their findings and “prior evidence strongly suggest that radiation emitted by UV nail polish dryers may cause cancers of the hand and that UV nail polish dryers, similar to tanning beds, may increase the risk of early onset skin cancer.”


Dr. Rossi said that, “while this study shows that the UV exposure does affect human cells and causes mutations, the study was not done in vivo in human beings, so further studies are needed to know at what dose and frequency gel manicures would be needed to cause detrimental effects.” However, for people who regularly receive gel manicures involving UV nail dryers, both Dr. Lipner and Dr. Rossi recommend applying a broad-spectrum sunscreen to protect the dorsal hands, fingertips, and skin surrounding the nails, or wearing UV-protective gloves.

The study was supported by an Alfred P. Sloan Research Fellowship to one of the authors and grants from the National Institutes of Health to two authors. One author reported being a compensated consultant and having an equity interest in io9. Dr. Lipner and Dr. Rossi reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


FROM NATURE COMMUNICATIONS


What are the clinical implications of recent skin dysbiosis discoveries?

Article Type
Changed
Fri, 04/07/2023 - 13:43

As the study of cutaneous dysbiosis and its role in the pathogenesis of dermatoses continues to evolve, how the mounting evidence on this topic translates into clinical practice remains largely unknown.

“There’s still a lot for us to learn,” Adam Friedman, MD, professor and chair of dermatology at George Washington University, Washington, said at the annual meeting of the American Academy of Dermatology. “Multiple factors contribute to the variability in the skin microbiota, including age, sex, environment, immune system, host genotype, lifestyle, and pathobiology. The question becomes, when do these factors or impacts on the microbiota become clinically significant?”

According to Dr. Friedman, there are 10 times more bacterial cells than human cells in the human body, “but it’s not a fight to the finish; it’s not us versus them,” he said. “Together, we are a super organism.” There are also more than 500 species of bacteria on human skin, not counting viruses and fungi, and each person carries up to 5 pounds of bacteria, which is akin to finding a new organ in the body.


“What’s so unique is that we each have our own bacterial fingerprint,” he said. “Whoever is sitting next to you? Their microbiota makeup is different than yours.”

Beyond genetics and environment, activities that can contribute to alterations in skin flora or skin dysbiosis include topical application of steroids, antibiotics, retinoids, harsh soaps, chemical and physical exfoliants, and resurfacing techniques. “With anything we apply or do to the skin, we are literally changing the home of many microorganisms, for good or bad,” he said.

In the realm of atopic dermatitis (AD), Staphylococcus aureus has been implicated as an offender in the pathophysiology of the disease. “It’s not about one single species of Staphylococcus, though,” said Dr. Friedman, who also is director of translational research at George Washington University. “We’re finding out that, depending on the severity of disease, Staph. epidermidis may be part of the problem as opposed to it just being about Staph. aureus. Furthermore, and more importantly, these changes in the microbiota, specifically a decrease in microbial diversity, have been shown to precede a disease flare, highlighting the central role of maintaining microbial diversity and, by definition, supporting the living barrier in our management of AD.”

With this in mind, researchers in one study used high-throughput sequencing to evaluate the microbial communities associated with affected and unaffected skin of 49 patients with AD before and after emollient treatment. Following 84 days of emollient application, clinical symptoms of AD improved in 72% of the study population and Stenotrophomonas species were significantly more abundant among responders.
 

Prebiotics, probiotics

“Our treatments certainly can positively impact the microbiota, as we have seen even recently with some of our new targeted therapies, but we can also directly provide support,” he continued. Prebiotics, which he defined as supplements or foods that contain a nondigestible ingredient that selectively stimulates the growth and/or activity of indigenous bacteria, can be found in many over-the-counter moisturizers.


For example, colloidal oatmeal has been found to support the growth of S. epidermidis and enhance the production of lactic acid. “We really don’t know much about what these induced changes mean from a clinical perspective; that has yet to be elucidated,” Dr. Friedman said.

In light of the recent attention to the early application of moisturizers in infants at high risk of developing AD in an effort to prevent or limit AD, “maybe part of this has to do with applying something that’s nurturing an evolving microbiota,” Dr. Friedman noted. “It’s something to think about.”

Yet another area of study involves the use of probiotics, which Dr. Friedman defined as supplements or foods that contain viable microorganisms that alter the microflora of the host. In a first-of-its-kind trial, researchers evaluated the safety and efficacy of self-administered topical Roseomonas mucosa in 10 adults and 5 children with AD. No adverse events or treatment complications were observed, and the topical R. mucosa was associated with significant decreases in measures of disease severity, topical steroid requirement, and S. aureus burden.

In a more recent randomized trial of 11 patients with AD, Richard L. Gallo, MD, PhD, chair of dermatology, University of California, San Diego, and colleagues found that application of a personalized topical cream formulated from coagulase-negative Staphylococcus with antimicrobial activity against S. aureus reduced colonization of S. aureus and improved disease severity.



And in another randomized, controlled trial, Italian researchers enrolled 80 adults with mild to severe AD to receive a placebo or a supplement that was a mixture of lactobacilli for 56 days. They found that adults in the treatment arm showed an improvement in skin smoothness, skin moisturization, self-perception, and a decrease in the SCORing Atopic Dermatitis (SCORAD) index as well as in levels of inflammatory markers associated with AD.

Dr. Friedman also discussed postbiotics, nonviable bacterial products or metabolic byproducts from probiotic microorganisms that have biologic activity in the host. In one trial, French researchers enrolled 75 people with AD who ranged in age from 6 to 70 years to receive a cream containing a 5% lysate of the nonpathogenic bacterium Vitreoscilla filiformis, or a vehicle cream, for 30 days. They found that compared with the vehicle, V. filiformis lysate significantly decreased SCORAD levels and pruritus; active cream was shown to significantly decrease loss of sleep from day 0 to day 29.

Dr. Friedman characterized these novel approaches to AD as “an exciting area, one we need to pay attention to. But what I really want to know is, aside from these purposefully made and marketed products that have pre- and postprobiotics, is there a difference with some of the products we use already? My assumption is that there is, but we need to see that data.”

Dr. Friedman disclosed that he is a consultant and/or advisory board member for Medscape/SanovaWorks, Oakstone Institute, L’Oréal, La Roche Posay, Galderma, Aveeno, Ortho Dermatologic, Microcures, Pfizer, Novartis, Lilly, Hoth Therapeutics, Zylo Therapeutics, BMS, Vial, Janssen, Novocure, Dermavant, Regeneron/Sanofi, and Incyte. He has also received grants from Pfizer, the Dermatology Foundation, Lilly, Janssen, Incyte, and Galderma.


AT AAD 2023


Spherical heart may predict cardiomyopathy, AFib

Article Type
Changed
Thu, 04/20/2023 - 17:45

A round heart, or left ventricle sphericity, predicted cardiomyopathy and atrial fibrillation (AFib) in a deep learning analysis of MRI images from close to 39,000 participants in the UK Biobank, a new study shows.

An increase of 1 standard deviation in the sphericity index (short axis length/long axis length) was associated with a 47% increased incidence of cardiomyopathy and a 20% increased incidence of AFib, independent of clinical factors and traditional MRI measures.
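
To make the per-standard-deviation framing concrete, the minimal sketch below computes a sphericity index from short- and long-axis measurements and expresses it as a z-score against a reference group; the cohort values, patient measurements, and the use of Python here are illustrative assumptions, not data or code from the study.

import statistics

def sphericity_index(short_axis_mm, long_axis_mm):
    # Sphericity index as described above: short-axis length divided by long-axis length.
    # Values closer to 1.0 indicate a rounder (more spherical) left ventricle.
    return short_axis_mm / long_axis_mm

# Hypothetical reference cohort of indices (illustrative numbers only).
cohort = [0.52, 0.55, 0.58, 0.60, 0.57, 0.54, 0.61, 0.56, 0.59, 0.53]
mean, sd = statistics.mean(cohort), statistics.stdev(cohort)

patient_index = sphericity_index(short_axis_mm=48.0, long_axis_mm=80.0)  # hypothetical scan
z = (patient_index - mean) / sd

# Under a proportional-hazards model with a per-SD hazard ratio of ~1.47 for
# cardiomyopathy (as reported), the relative hazard scales roughly as 1.47 ** z.
print(f"index={patient_index:.2f}, z={z:+.2f}, relative hazard ~{1.47 ** z:.2f}x")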

Furthermore, a genetic analysis suggested a shared architecture between sphericity and nonischemic cardiomyopathy (NICM), pointing to NICM as a possible causal factor for left ventricular sphericity among individuals with normal left ventricular (LV) size and function.

“Physicians have known the heart gets rounder after heart attacks and as we get older,” David Ouyang, MD, a cardiologist in the Smidt Heart Institute at Cedars-Sinai Medical Center, Los Angeles, and a researcher in the division of artificial intelligence in medicine, said in an interview. “We wanted to see if this sphericity is prognostic of future disease even in healthy individuals.”

Although it is too early to recommend heart shape assessment in healthy asymptomatic people, he said, “physicians should be extra careful and think about treatments when they notice a patient’s heart is particularly round.”

The study was published online March 29 in the journal Med.
 

Sphericity index key

The investigators hypothesized that there is variation in LV sphericity within the spectrum of normal LV chamber size and systolic function, and that such variation might be a marker of cardiac risk with genetic influences.

To test this hypothesis, they used automated deep-learning segmentation of cardiac MRI data to estimate and analyze the sphericity index in a cohort of 38,897 individuals participating in the UK Biobank.

After adjustment for age at MRI and sex, an increased sphericity index was associated with an increased risk for cardiomyopathy (hazard ratio, 1.57), AFib (HR, 1.35), and heart failure (HR, 1.37).

No significant association was seen with cardiac arrest.

The team then stratified the cohort into quintiles and compared the top 20%, middle 60%, and bottom 20%. The relationship between the sphericity index and risk extended across the distribution; individuals with higher than median sphericity had increased disease incidence, and those with lower than median sphericity had decreased incidence.
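
Readers wanting to reproduce that kind of bottom 20%/middle 60%/top 20% split on their own data could do so in a few lines; the sketch below is a generic illustration with hypothetical column names and simulated values, not the UK Biobank fields or the investigators' pipeline.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sphericity_index": rng.normal(0.57, 0.05, size=1000),  # hypothetical indices
    "cardiomyopathy": rng.integers(0, 2, size=1000),         # hypothetical outcomes
})

# Split the distribution into the bottom 20%, middle 60%, and top 20%.
df["stratum"] = pd.qcut(df["sphericity_index"], q=[0, 0.2, 0.8, 1.0],
                        labels=["bottom 20%", "middle 60%", "top 20%"])

# Compare crude event proportions across strata; a real analysis would use survival models.
print(df.groupby("stratum", observed=True)["cardiomyopathy"].mean())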

Overall, a single standard deviation in the sphericity index was associated with increased risk of cardiomyopathy (HR, 1.47) and of AFib (HR, 1.20), independent of clinical factors and usual MRI measurements.

In a minimally adjusted model, the sphericity index was a predictor of incident cardiomyopathy, AFib, and heart failure.

Adjustment for clinical factors partially attenuated the heart failure association; additional adjustment for MRI measurements fully attenuated that association and partially attenuated the association with AFib.

However, in all adjusted models, the association with cardiomyopathy showed little attenuation.

Furthermore, the team identified four loci associated with sphericity at genomewide significance – PLN, ANGPT1, PDZRN3, and HLA DR/DQ – and Mendelian randomization supported NICM as a cause of LV sphericity.
 

Looking ahead

“While conventional imaging metrics have significant diagnostic and prognostic value, some of these measurements have been adopted out of convenience or tradition,” the authors noted. “By representing a specific multidimensional remodeling phenotype, sphericity has emerged as a distinct morphologic trait with features not adequately captured by conventional measurements.

“We expect that the search space of potential imaging measurements is vast, and we have only begun to scratch at the surface of disease associations.”

Indeed, Dr. Ouyang said his group is “trying to evaluate the sphericity in echocardiograms or heart ultrasounds, which are more common and cheaper than MRI.”

“The main caveat is translating the information directly to patient care,” Richard C. Becker, MD, director and physician-in-chief of the University of Cincinnati Heart, Lung, and Vascular Institute, said in an interview. “Near-term yield could include using the spherical calculation in routine MRI of the heart, and based on the findings, following patients more closely if there is an abnormal shape. Or performing an MRI and targeted gene testing if there is a family history of cardiomyopathy or [of] an abnormal shape of the heart.”

“Validation of the findings and large-scale evaluation of the genes identified, and how they interact with patient and environmental factors, will be very important,” he added.

Nevertheless, “the study was well done and may serve as a foundation for future research,” Dr. Becker said. “The investigators used several powerful tools, including MRI, genomics, and [artificial intelligence] to draw their conclusions. This is precisely the way that ‘big data’ should be used – in a complementary fashion.”

The study authors and Dr. Becker reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


FROM MED


Survival improved for some patients with metastatic cancers

Article Type
Changed
Fri, 04/07/2023 - 18:26

 

Over the past 30 years, more than 80 new systemic therapies for cancer have been approved, and many patients diagnosed with localized disease have benefited, with improved progression-free and overall survival. The same can be said for some – but by no means all – patients with metastatic disease at diagnosis, a new study indicates.

“Our results show that the survival of patients with de novo metastatic cancer improved slowly over 30 years but that these gains were typically modest and unevenly distributed among cancers,” comment the authors, led by Marianne Luyendijk, MSc, from the Netherlands Comprehensive Cancer Organization, Utrecht.

The study was published online in the Journal of the National Cancer Institute.

The retrospective study compared survival data of patients with de novo metastatic disease diagnosed from 1989 through 1993 with those of patients diagnosed from 2014 to 2018.

The results show that 5-year survival increased by 15% or more among patients with metastatic gastrointestinal stromal tumors; neuroendocrine tumors; melanoma; and cancers of the prostate, breast, thyroid, and testes.

For patients with other cancers, however, the gains in survival were more modest. For example, over the study period, 5-year survival of patients with metastatic non–small cell lung cancer (NSCLC) increased by only 6%, a disappointing finding, given the advent of targeted therapies and immunotherapy during the most recent period, the authors note.

In contrast, there was a 16% improvement in long-term survival of patients with metastatic melanoma, likely owing to the introduction of immune checkpoint inhibitors and targeted therapies, such as tyrosine kinase inhibitors.

The data also showed differences over time in the proportion of patients diagnosed with de novo metastatic disease; some cancers, such as NSCLC and small cell lung cancer, were more frequently diagnosed at late stages in the more recent era, possibly owing to increased screening and the use of technology such as FDG-PET imaging.

On the other end of the spectrum, cancers of the prostate, rectum, uterine cervix, breast, gallbladder, and bile ducts were more likely to be caught at an earlier stage during later years of the study period.

The authors say that one possible explanation for the modest survival gains in metastatic disease over time is that new drugs do not always translate into improved survival. They cite a 2017 study showing that among 53 new cancer drugs approved by U.S., European, or Australian drug regulators, fewer than half improved overall survival by at least 3 months, and an additional 26% offered survival advantages that were either shorter than 3 months or of unknown benefit.

“This may also explain why the 1- and 5-year survival rates of some cancers have changed little in the last 30 years,” they write. “Nevertheless, even minor benefits in survival or other outcomes (for example, quality of life) may represent progress in treating patients with metastatic cancer.”

The investigators recommend that to improve understanding of the effect of new therapies on survival of metastatic disease, cancer registries include data on therapies used beyond the first line, as well as comorbidities and quality-of-life measures.

The authors did not report a study funding source. Ms. Luyendijk has disclosed no relevant financial relationships. Several co-authors reported financial relationships with pharmaceutical companies.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

 

Over the past 30 years, more than 80 new systemic therapies for cancer have been approved, and many patients diagnosed with localized disease have benefited with improved progression-free and overall survival. The same can be said for some – but by no means all – patients with metastatic disease at diagnosis, a new study indicates.

“Our results show that the survival of patients with de novo metastatic cancer improved slowly over 30 years but that these gains were typically modest and unevenly distributed among cancers,” comment the authors, led by Marianne Luyendijk, MSc, from the Netherlands Comprehensive Cancer Organization, Utrecht.

The study was published online  in the Journal of the National Cancer Institute.

The retrospective study compared survival data of patients with de novo metastatic disease diagnosed from 1989 through 1993 with those of patients diagnosed from 2014 to 2018.

The results show that 5-year survival increased by 15% or more among patients with metastatic gastrointestinal stromal tumors; neuroendocrine tumors; melanoma; and cancers of the prostate, breast, thyroid, and testes.

For patients with other cancers, however, the gains in survival were more modest. For example, over the study period, 5-year survival of patients with metastatic non–small cell lung cancer increased by only 6%, a disappointing finding, given the advent of targeted therapies and immunotherapy during the most recent period, the authors note.

In contrast, there was a 16% improvement in long-term survival of patients with metastatic melanoma, likely owing to the introduction of immune checkpoint inhibitors and targeted therapies, such as tyrosine kinase inhibitors.

The data also showed differences over time in the proportion of patients diagnosed with de novo metastatic disease; some cancers, such as NSCLC and small cell lung cancer, were more frequently diagnosed at late stages in the more recent era, possibly owing to increased screening and the use of technology such as FDG-PET imaging.

On the other end of the spectrum, cancers of the prostate, rectum, uterine cervix, breast, gallbladder, and bile ducts were more likely to be caught at an earlier stage during later years of the study period.

The authors say that among the possible explanations for a less than robust reduction over time in metastatic disease is that new drugs do not always translate into improved survival. They cite a 2017 study showing that among 53 new cancer drugs approved by U.S., European, or Australian drug regulators, fewer than half improved overall survival by at least 3 months, and an additional 26% offered survival advantages that were either shorter than 3 months or of unknown benefit.

“This may also explain why the 1- and 5-year survival rates of some cancers have changed little in the last 30 years,” they write. “Nevertheless, even minor benefits in survival or other outcomes (for example, quality of life) may represent progress in treating patients with metastatic cancer.”

The investigators recommend that to improve understanding of the effect of new therapies on survival of metastatic disease, cancer registries include data on therapies used beyond the first line, as well as comorbidities and quality-of-life measures.

The authors did not report a study funding source. Ms. Luyendijk has disclosed no relevant financial relationships. Several co-authors reported financial relationships with pharmaceutical companies.

A version of this article first appeared on Medscape.com.