Clinical Psychiatry News is the online destination and multimedia home of Clinical Psychiatry News, the independent news publication for psychiatrists. Since 1971, Clinical Psychiatry News has been the leading source of news and commentary about clinical developments in psychiatry, as well as health care policy and regulations that affect the physician's practice.

TNF-alpha, oxidative stress disturbance may play role in schizophrenia pathophysiology

Disturbance of tumor necrosis factor (TNF)–alpha and oxidative stress status may be involved in the pathophysiology of schizophrenia, new study results suggest.

In a study published in Psychoneuroendocrinology, the investigators collected blood samples from 119 patients with schizophrenia and 135 controls. Along with TNF-alpha, the oxidative stress markers superoxide dismutase (SOD), glutathione peroxidase (GSH-Px), catalase (CAT), and malondialdehyde (MDA) were assayed. The average illness duration in patients with schizophrenia was 8.23 months, and their average total Positive and Negative Syndrome Scale score was 87.64, reported Shiguang Zhu of Nanjing (China) Medical University and associates.

Serum levels of TNF-alpha and MDA were significantly higher (P = .007 for both), and GSH-Px levels were significantly lower (P = .005), in patients with schizophrenia, compared with controls, after Bonferroni correction. The interaction between GSH-Px and TNF-alpha was negatively associated with the presence of schizophrenia (odds ratio, 0.99; 95% confidence interval, 0.98-0.99; P = .001), and the interaction between MDA and TNF-alpha was positively associated with schizophrenia risk (OR, 1.61; 95% CI, 1.16-2.24; P = .004).
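For readers who want to see how interaction odds ratios of this kind are typically obtained, the sketch below fits a logistic regression with multiplicative interaction terms on simulated data. It is a minimal illustration of the general technique, not the authors' analysis code; the variable names and simulated values are hypothetical.

```python
# Minimal sketch: estimating interaction odds ratios with logistic regression.
# Hypothetical variable names and simulated data; not the authors' actual analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 254  # 119 patients + 135 controls, as in the study
df = pd.DataFrame({
    "dx": np.r_[np.ones(119), np.zeros(135)],   # 1 = schizophrenia, 0 = control
    "tnf_alpha": rng.normal(10, 3, n),          # serum TNF-alpha (simulated)
    "gsh_px": rng.normal(80, 15, n),            # GSH-Px activity (simulated)
    "mda": rng.normal(5, 1.5, n),               # MDA level (simulated)
})

# 'gsh_px * tnf_alpha' expands to both main effects plus the interaction term.
model = smf.logit("dx ~ gsh_px * tnf_alpha + mda * tnf_alpha", data=df).fit(disp=False)

# Exponentiated coefficients are odds ratios; the interaction rows correspond
# to the GSH-Px x TNF-alpha and MDA x TNF-alpha effects reported in the paper.
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```

Because the data here are simulated, the fitted odds ratios will not match the published values; the point is only the modeling pattern.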

“It is worth[while] to note that [the] immune-inflammatory and oxidative stress hypothesis are just one of the theories for schizophrenic development, and other neurobiological theories such as neurodevelopmental dysfunction and hypothalamus-pituitary-adrenal axis hormones disturbance should be considered,” the investigators wrote. However, their study “suggests that TNF-alpha and disturbance of oxidative stress status as well as their interaction may be involved in the pathophysiology of schizophrenia.”

The study was supported by the National Natural Science Foundation of China, Shanghai Jiao Tong University Medical Engineering Foundation, Shanghai Jiao Tong University School of Medicine, and CAS Key Laboratory of Mental Health. The investigators reported that they had no conflicts of interest.

SOURCE: Zhu S et al. Psychoneuroendocrinology. 2020 Jan 30. doi: 10.1016/j.psyneuen.2020.104595.

Cigarette smoking is associated with prefrontal function in patients with schizophrenia

Patients with schizophrenia have decreased chronnectomic density in the dorsolateral prefrontal cortex, compared with healthy controls, and cigarette smoking in patients with schizophrenia may be associated with a degree of preserved function in that brain region, researchers reported. The results indicate that smoking may be associated with a preservation effect, but it “cannot restore patients’ prefrontal dysfunction to normal levels,” the researchers said.

The chronnectome depicts how brain functional connectivity patterns (i.e., the connectome) vary over time. Prior research has suggested that the chronnectome is altered in patients with schizophrenia and in people with nicotine addiction. “Therefore, the chronnectome may be an effective index to evaluate the smoking-related prefrontal functional changes in schizophrenia,” said Yun-Shuang Fan, a researcher at the Clinical Hospital of Chengdu Brain Science Institute in China, and colleagues in the report, which was published in Progress in Neuro-Psychopharmacology & Biological Psychiatry.

The investigators studied 49 patients with schizophrenia, including 22 smokers and 27 nonsmokers, and 43 healthy controls, including 22 smokers and 21 nonsmokers. Participants underwent resting-state functional magnetic resonance imaging, and the researchers analyzed chronnectomic density using a sliding-window method. The investigators examined interactions between smoking status and diagnosis.
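As background on the sliding-window approach mentioned above, the sketch below computes windowed connectivity matrices from a simulated resting-state time series and tracks how densely a seed region is connected in each window. It is an illustrative reconstruction of the general method; the window length, step size, correlation threshold, and region count are assumptions, not the study's settings.

```python
# Illustrative sliding-window connectivity ("chronnectome") sketch.
# Window length, step, and threshold are assumptions, not the study's parameters.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints, n_regions = 240, 90          # simulated resting-state fMRI time series
ts = rng.standard_normal((n_timepoints, n_regions))

window, step, threshold = 30, 2, 0.3       # assumed analysis parameters
seed = 0                                   # e.g., a dorsolateral prefrontal node

densities = []
for start in range(0, n_timepoints - window + 1, step):
    segment = ts[start:start + window]                     # windowed time series
    corr = np.corrcoef(segment, rowvar=False)              # region-by-region correlation
    # Density of the seed node: fraction of regions it is strongly coupled to.
    links = np.abs(corr[seed]) > threshold
    densities.append((links.sum() - 1) / (n_regions - 1))  # exclude self-connection

print(f"mean windowed density of seed region: {np.mean(densities):.3f}")
```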

Smoking was associated with reduced chronnectomic density in healthy controls, but increased density in patients with schizophrenia. The study provides a “framework to elaborate upon the self-medication hypothesis in schizophrenia” and sheds “some fresh light on the elevated rates of smoking in schizophrenia,” they said.

The study was relatively small, and patients’ use of antipsychotic medications, which can affect the connectome, may limit the results. In addition, the study’s cross-sectional design precludes knowing whether “smoking behavior is the cause or result of the prefrontal chronnectome alterations in schizophrenia,” the authors added.

The study was supported by the National Natural Science Foundation of China and the Sichuan Science and Technology Program. The researchers had no conflicts of interest.

SOURCE: Fan YS et al. Prog Neuropsychopharmacol Biol Psychiatry. 2020 Apr 20. doi: 10.1016/j.pnpbp.2020.109860.

EEG signature predicts antidepressant response

Personalized treatment for depression may soon become a reality, thanks to an artificial intelligence (AI) algorithm that accurately predicts antidepressant efficacy in specific patients.

A landmark study of more than 300 patients with major depressive disorder (MDD) showed that a latent-space machine-learning algorithm tailored for resting-state EEG robustly predicted patient response to sertraline. The findings were generalizable across different study sites and EEG equipment.

“We found that the use of the artificial intelligence algorithm can identify the EEG signature for patients who do well on sertraline,” study investigator Madhukar H. Trivedi, MD, professor of psychiatry at the University of Texas Southwestern Medical Center in Dallas, said in an interview.

“Interestingly, when we looked further, it became clear that patients with that same EEG signature do not do well on placebo,” he added.

The study was published online Feb. 10 in Nature Biotechnology (doi: 10.1038/s41587-019-0397-3).

Pivotal study

Currently, major depression is defined using a range of clinical criteria. As such, it encompasses a heterogeneous mix of neurobiological phenotypes. Such heterogeneity may account for the modest superiority of antidepressant medication relative to placebo.

While recent research suggests that resting-state EEG may help identify treatment-predictive heterogeneity in depression, these studies have also been hindered by a lack of cross-validation and small sample sizes.

What’s more, these studies have either identified nonspecific predictors or failed to yield generalizable neural signatures that are predictive at the individual patient level (Am J Psychiatry. 2019 Jan 1;176[1]:44-56).

For these reasons, there is currently no robust neurobiological signature for an antidepressant-responsive phenotype that may help identify which patients would benefit from antidepressant medication. Nevertheless, said Dr. Trivedi, detailing such a signature would promote a neurobiological understanding of treatment response, with the potential for notable clinical implications.

“The idea behind this [National Institutes of Health]–funded study was to develop biomarkers that can distinguish treatment outcomes between drug and placebo,” he said. “To do so, we needed a randomized, placebo-controlled trial that has significant breadth in terms of biomarker evaluation and validation, and this study was designed specifically with this end in mind.

“There has not been a drug-placebo study that has looked at this in patients with depression,” Dr. Trivedi said. “So in that sense, this was really a pivotal study.”

To help address these challenges, the investigators developed a machine-learning algorithm they called SELSER (Sparse EEG Latent Space Regression).

Using data from four separate studies, they first established the resting-state EEG predictive signature by training SELSER on data from 309 patients from the EMBARC (Establishing Moderators and Biosignatures of Antidepressant Response in Clinic Care) study, a neuroimaging-coupled, placebo-controlled, randomized clinical study of antidepressant efficacy.

The generalizability of the antidepressant-predictive signature was then tested in a second independent sample of 72 depressed patients.

In a third independent sample of 24 depressed patients, the researchers assessed the convergent validity and neurobiological significance of the treatment-predictive, resting-state EEG signature.

Finally, a fourth sample of 152 depressed patients was used to test the generalizability of the results.

‘Fantastic’ result but validation needed

These combined efforts were aimed at revealing a treatment-responsive phenotype in depression, dissociating medication response from placebo response, establishing the phenotype's mechanistic significance, and providing initial evidence regarding the potential for treatment selection on the basis of a resting-state EEG signature.

The study showed that improvement in patients’ symptoms was robustly predicted by the algorithm. These predictions were specific for sertraline relative to placebo.

When the model was generalized to two additional depression samples, the researchers also found that the algorithm reflected general antidepressant medication responsivity and related differentially to a repetitive transcranial magnetic stimulation (TMS) treatment outcome.

“Although we only looked at sertraline,” Dr. Trivedi said, “we also applied the signature to a sample of patients who had been treated with transcranial magnetic stimulation. And we found that the signature for TMS [response] is different than the signature for sertraline.”

Interestingly, the antidepressant-predictive signature identified by SELSER was also superior to those produced by conventional machine-learning models and by latent modeling methods such as independent-component analysis or principal-component analysis. The SELSER signature also outperformed a model trained on clinical data alone and was able to predict outcome using resting-state EEG data acquired at a study site not included in the model training set.
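SELSER itself is the authors' method, and its details are not reproduced here. To make the contrast drawn in the preceding paragraph concrete, the sketch below compares a generic latent-space regression (partial least squares) with a PCA-plus-regression baseline on simulated EEG-like features; all data, dimensions, and hyperparameters are hypothetical.

```python
# Hypothetical contrast between a latent-space regression and a PCA baseline.
# This is NOT SELSER; it only illustrates the idea of regressing symptom change
# onto a low-dimensional latent space of resting-state EEG features.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_patients, n_features = 309, 500              # e.g., channel-by-band EEG power features
X = rng.standard_normal((n_patients, n_features))
latent = X[:, :3] @ rng.standard_normal(3)     # hidden treatment-relevant component
y = latent + rng.standard_normal(n_patients)   # simulated symptom improvement

# Latent-space regression: PLS finds components that are predictive of outcome.
pls_score = cross_val_score(PLSRegression(n_components=3), X, y,
                            cv=5, scoring="r2").mean()

# Baseline: unsupervised PCA components fed into an ordinary linear regression.
pca_score = cross_val_score(LinearRegression(),
                            PCA(n_components=3).fit_transform(X), y,
                            cv=5, scoring="r2").mean()

print(f"PLS (latent-space) R^2: {pls_score:.2f}  vs  PCA baseline R^2: {pca_score:.2f}")
```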

The study also revealed evidence of multimodal convergent validity for the antidepressant-response signature by virtue of its correlation with expression of a task-based functional MRI signature in one of the four datasets.

The strength of the resting-state signature was also found to correlate with prefrontal neural responsivity, as indexed by direct stimulation with single-pulse TMS and EEG.

Given the ability of the algorithm to both predict outcome with sertraline and distinguish response between sertraline and placebo at the individual patient level, the investigators believe SELSER may one day support machine learning–driven personalized approaches to depression treatment.

“Our findings advance the neurobiological understanding of antidepressant treatment through an EEG-tailored computational model and provide a clinical avenue for personalized treatment of depression,” the authors wrote.

Yet, their work is far from over. Among the investigators’ next steps is the development of an AI interface that can be widely integrated with EEGs across the country.

“Identifying this signature was fantastic, but you’ve got to be able to validate it as well,” Dr. Trivedi noted. “And luckily, we were able to validate it in the three additional studies.

“The next question is whether it can be broadened to other illnesses.”

Promising research

Commenting on the findings in an interview, Michele Ferrante, PhD, said he believes there may soon be a time during which algorithms such as this are used to personalize depression treatment.

“It’s well known that there are no good biological tests in psychiatry, but promising computational tools, biomarkers, and behavioral signatures for segregating patients according to treatment response are starting to emerge for depression,” said Dr. Ferrante, program chief of the Theoretical and Computational Neuroscience Program at the National Institute of Mental Health (NIMH).

“Precision in the ability to predict what patient will respond to each treatment will improve over time, I have no doubt,” added Dr. Ferrante, who was not involved with the current study.

However, he noted, such approaches are not without their potential drawbacks.

“The greatest challenge is to continuously validate these computational tools as they keep on learning from more heterogeneous groups. Another challenge will be to make sure that these computational tools become well-established, widely adopted, safe, and regulated by the [Food and Drug Administration] as Software as a Medical Device,” he said.

The current algorithm will also need to undergo further testing, said Dr. Ferrante.

“It has been validated on an external dataset,” he said, “but now we need to do rigorous prospective clinical trials where patients are selectively assigned by the AI to a treatment according to their biosignature, to see if these results hold true.

“Down the road, it would be important to implement computational models [that are] able to assign patients across the multiple treatments available for depression, including pharmaceuticals, psychosocial interventions, and neural devices.”

The study was funded directly and indirectly by the NIMH of the National Institutes of Health, the Stanford Neurosciences Institute, the Hersh Foundation, the National Key Research and Development Plan of China, and the National Natural Science Foundation of China.

Dr. Trivedi disclosed numerous financial relationships with pharmaceutical companies and device manufacturers. He has received grants/research support from the Agency for Healthcare Research and Quality, Cyberonics, the National Alliance for Research in Schizophrenia and Depression, the NIMH, and the National Institute on Drug Abuse.

A version of this article first appeared on Medscape.com.

Excessive masculinity linked to high suicide risk

Excessive masculinity is linked to a significantly increased risk for death by suicide in men, new research suggests.

In the first study to show this association, investigators found that men with high traditional masculinity (HTM) – a set of norms that includes competitiveness, emotional restriction, and aggression – were about two and a half times more likely to die by suicide than their counterparts without HTM. The finding underscores the “central role” of gender in suicide death.

“We found that high-traditional-masculinity men were 2.4 times more likely to die by suicide than those who were not [of] high traditional masculinity. We feel this is a significant finding, and one that’s very rare to have evidence for,” study investigator Daniel Coleman, PhD, said in an interview.

“Our other findings are also important and interesting,” added Dr. Coleman, associate professor of social service at Fordham University, New York. “One was that high traditional masculinity was associated with a host of other significant risk factors for suicide death. So not only does high traditional masculinity add to the risk of suicide death, it also may have indirect effects through other variables, such as acting-out behavior.”

The study was published online Feb. 12 in JAMA Psychiatry (doi: 10.1001/jamapsychiatry.2019.4702).

First look

In the United States, death by suicide is 3.5 times more common in men than in women. Several potential drivers may explain this phenomenon; one plausible factor may be high levels of what the investigators describe as “traditional masculinity.”

Interestingly, previous studies suggest that HTM men experience suicidal thoughts to a greater degree than do other persons (Soc Psychiatry Psychiatr Epidemiol. 2017 Mar;52[3]:319-27). Nevertheless, the potential association between HTM and suicide mortality had not been examined before now.

The study is a secondary analysis of the longitudinal Add Health (the National Longitudinal Study of Adolescent to Adult Health) study, which began in 1995 and followed 20,745 adolescents through young adulthood. Not only did the new analysis show a direct association between measures of HTM and death by suicide, but it also corroborated the connection between HTM and other risk factors for suicide revealed in earlier research (Suicide Life Threat Behav. 2016 Apr;46[2]:191-205).

To tease out this relationship, Dr. Coleman and colleagues used data from the nationally representative Add Health study. That earlier research concluded that nine Add Health variables were associated with suicide; these included suicide by a family member, being expelled from school, running away from home, using a weapon, being of white race, a past history of smoking, being in a serious fight in the past year, delinquency, and fighting.

In the current study, the researchers hypothesized that HTM would be associated with these nine variables, in addition to suicide, depression, and gun access.

In the Add Health study, the adolescents were followed over time. In the current analysis, the researchers matched data from that study with death records from the National Death Index from 2014. Death by suicide was defined using National Death Index procedures.

The investigators then used an established procedure for scoring gender-typed attitudes and behaviors. As part of this, a single latent probability variable for identifying oneself as male was generated from 16 gender-discriminating variables.

Participants who were found to score at least a 73% probability of identifying as male (greater than 1 standard deviation above the mean) were classified as HTM.
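To make that classification step concrete, the sketch below reproduces the general logic on simulated data: a logistic model predicts reported male sex from a set of gender-discriminating items, the predicted probability serves as the latent masculinity score, and scores more than 1 standard deviation above the mean are labeled HTM. The 16 items, weights, and exact scoring procedure used by the authors are not given in the article, so everything below is an assumption-labeled illustration.

```python
# Hedged illustration of the HTM classification logic described above.
# The items, weights, and cutoff derivation are simulated, not the study's.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n, n_items = 20745, 16                       # Add Health sample size, 16 items
items = rng.normal(size=(n, n_items))        # simulated gender-discriminating items
true_w = rng.normal(size=n_items)
sex_male = (items @ true_w + rng.logistic(size=n)) > 0   # simulated reported sex

# Latent masculinity score = predicted probability of identifying as male.
clf = LogisticRegression(max_iter=1000).fit(items, sex_male)
masc_score = clf.predict_proba(items)[:, 1]

# Classify as high traditional masculinity (HTM) if the score exceeds
# mean + 1 SD; in the study this threshold landed at about a 73% probability.
cutoff = masc_score.mean() + masc_score.std()
htm = masc_score > cutoff
print(f"cutoff probability: {cutoff:.2f}, HTM share: {htm.mean():.2%}")
```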

“There’s been a lot of speculating about masculinity as a risk factor for male suicides,” Dr. Coleman said. “But it’s very difficult to study suicide death and something psychosocial like masculinity. So this was an attempt to fill that gap and test the hypothesis that’s being discussed quite a bit.”

A relevant risk factor

Twenty-two suicide deaths occurred among the Add Health participants; 21 of the decedents were men (odds ratio, 21.7; 95% confidence interval, 2.9-161; P less than .001).

The analysis showed that all nine risks for suicide that were highlighted in previous research were positively associated with HTM, with small to medium effect sizes. Of these, the most pronounced was family member suicide, with an OR of 1.89 (95% CI, 1.3-2.7).

Most tellingly, HTM men were 2.4 times more likely to end their lives by suicide than were men not defined as such (95% CI, 0.99-6.0; P less than .046). Nevertheless, HTM men were also 1.45 times less likely to report suicidal ideation (OR, 0.69; 95% CI, 0.60-0.81; P less than .001). There was no association between HTM and nonfatal suicide attempts.

Interestingly, HTM men were slightly more likely to report easy access to guns (OR, 1.1; 95% CI, 1.01-1.20; P less than .04), but they had lower levels of depression (Cohen’s d, 0.17; P less than .001).

HTM has not only a direct association with suicide but also a web of indirect effects, thanks to its association with all the other risks identified in the previous study by another group of investigators.

HTM may be an underlying influence in male suicide that increases the probability of externalizing such behavioral risk factors as anger, violence, gun access, and school problems.

The finding that almost all of the people who died by suicide were men underscores the central role that gender plays in these tragedies. As such, the investigators hope that the study prompts more research, as well as intervention efforts aimed at the role of masculinity in suicide.

“There are already things going on around the world to try to address the risk factors of masculinity for suicide death,” Dr. Coleman said. “So even though we haven’t had the evidence that it’s a risk factor, people have been operating under that assumption anyway.

“Hopefully our research contributes to raising the profile that high traditional masculinity is a relevant risk factor that we can organize prevention and treatment around.”

An important contribution

Mark S. Kaplan, DrPH, commenting on the findings in an interview, said the study makes an important contribution to suicide research.

“Any study that tries to link a living sample with death data, as they did here, is important,” said Dr. Kaplan, professor of social welfare at the Luskin School of Public Affairs of the University of California, Los Angeles.

“It’s also important because it begins to scratch the surface of more proximal or distal factors that are associated with suicide, and masculinity is one of those factors,” Dr. Kaplan added.

“In an incremental way, it begins to add to the puzzle of why men have a higher mortality rate than their female counterparts. Because when it comes to suicide, men and women really are apples and oranges.”

Dr. Kaplan believes HTM is one of several traits that may lead men to take their own lives.

“There are all sorts of other issues. For example, masculinity might be interacting with some of the harsh socioeconomic conditions that many men face. I think all of this points to the real need to understand why men die from suicide,” he said.

The Add Health study is funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development, with cooperative funding from 23 other federal agencies and foundations. No direct support was received from the grant for the current study. Dr. Coleman and Dr. Kaplan have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Excessive masculinity is linked to a significantly increased risk for death by suicide in men, new research suggests.

In the first study to show this association, investigators found that men with high traditional masculinity (HTM) – a set of norms that includes competitiveness, emotional restriction, and aggression – were about two and half times more likely to die by suicide than their counterparts without HTM. The finding underscores the “central role” of gender in suicide death.

“We found that high-traditional-masculinity men were 2.4 times more likely to die by suicide than those who were not [of] high traditional masculinity. We feel this is a significant finding, and one that’s very rare to have evidence for,” study investigator Daniel Coleman, PhD, said in an interview.

“Our other findings are also important and interesting,” added Dr. Coleman, associate professor of social service at Fordham University, New York. “One was that high traditional masculinity was associated with a host of other significant risk factors for suicide death. So not only does high traditional masculinity add to the risk of suicide death, it also may have indirect effects through other variables, such as acting-out behavior.”

The study was published online Feb. 12 in JAMA Psychiatry (doi: 10.1001/jamapsychiatry.2019.4702).
 

First look

Excessive masculinity is linked to a significantly increased risk for death by suicide in men, new research suggests.

In the first study to show this association, investigators found that men with high traditional masculinity (HTM) – a set of norms that includes competitiveness, emotional restriction, and aggression – were about two and a half times more likely to die by suicide than their counterparts without HTM. The finding underscores the “central role” of gender in suicide death.

“We found that high-traditional-masculinity men were 2.4 times more likely to die by suicide than those who were not [of] high traditional masculinity. We feel this is a significant finding, and one that’s very rare to have evidence for,” study investigator Daniel Coleman, PhD, said in an interview.

“Our other findings are also important and interesting,” added Dr. Coleman, associate professor of social service at Fordham University, New York. “One was that high traditional masculinity was associated with a host of other significant risk factors for suicide death. So not only does high traditional masculinity add to the risk of suicide death, it also may have indirect effects through other variables, such as acting-out behavior.”

The study was published online Feb. 12 in JAMA Psychiatry (doi: 10.1001/jamapsychiatry.2019.4702).
 

First look

In the United States, death by suicide is 3.5 times more common in men than in women. Several potential drivers may explain this phenomenon; one plausible factor may be high levels of what the investigators describe as “traditional masculinity.”

Interestingly, previous studies suggest that HTM men experience suicidal thoughts to a greater degree than do other persons (Soc Psychiatry Psychiatr Epidemiol. 2017 Mar;52[3]:319-27). Nevertheless, the potential influence of HTM on suicide mortality had not been examined until now.

The study is a secondary analysis of the longitudinal Add Health study (the National Longitudinal Study of Adolescent to Adult Health), which began in 1995 and followed 20,745 adolescents through young adulthood. Not only did the current analysis show a direct association between measures of HTM and death by suicide, but it also corroborated the connection between HTM and other risk factors for suicide revealed in earlier research (Suicide Life Threat Behav. 2016 Apr;46[2]:191-205).

To tease out this relationship, Dr. Coleman and colleagues used data from the nationally representative Add Health study. That earlier research concluded that nine Add Health variables were associated with suicide; these included suicide by a family member, being expelled from school, running away from home, using a weapon, being of white race, a past history of smoking, being in a serious fight in the past year, delinquency, and fighting.

In the current study, the researchers hypothesized that HTM would be associated with these nine variables, in addition to suicide, depression, and gun access.

In the Add Health study, the adolescents were followed over time. In the current analysis, the researchers matched data from that study with death records from the National Death Index from 2014. Death by suicide was defined using National Death Index procedures.

The investigators then used an established procedure for scoring gender-typed attitudes and behaviors. As part of this, a single latent probability variable for identifying oneself as male was generated from 16 gender-discriminating variables.

Participants who were found to score at least a 73% probability of identifying as male (greater than 1 standard deviation above the mean) were classified as HTM.

“There’s been a lot of speculating about masculinity as a risk factor for male suicides,” Dr. Coleman said. “But it’s very difficult to study suicide death and something psychosocial like masculinity. So this was an attempt to fill that gap and test the hypothesis that’s being discussed quite a bit.”
 

 

 

A relevant risk factor

Twenty-two suicide deaths occurred among the Add Health participants; 21 of the decedents were men (odds ratio, 21.7; 95% confidence interval, 2.9-161; P less than .001).

The analysis showed that all nine risks for suicide that were highlighted in previous research were positively associated with HTM, with small to medium effect sizes. Of these, the most pronounced was family member suicide, with an OR of 1.89 (95% CI, 1.3-2.7).

Most tellingly, HTM men were 2.4 times more likely to end their lives by suicide than were men not classified as HTM (95% CI, 0.99-6.0; P = .046). Nevertheless, HTM men were also 1.45 times less likely to report suicidal ideation – the inverse of the reported OR of 0.69 (95% CI, 0.60-0.81; P less than .001). There was no association between HTM and nonfatal suicide attempts.

Interestingly, HTM men were slightly more likely to report easy access to guns (OR, 1.1; 95% CI, 1.01-1.20; P = .04), but they had lower levels of depression (Cohen’s d, 0.17; P less than .001).

HTM has not only a direct association with suicide but also a web of indirect effects, through its association with all of the other risks identified in the previous study by another group of investigators.

HTM may be an underlying influence in male suicide that increases the probability of externalizing such behavioral risk factors as anger, violence, gun access, and school problems.

The finding that almost all of the people who died by suicide were men underscores the central role that gender plays in these tragedies. As such, the investigators hope that the study prompts more research, as well as intervention efforts aimed at the role of masculinity in suicide.

“There are already things going on around the world to try to address the risk factors of masculinity for suicide death,” Dr. Coleman said. “So even though we haven’t had the evidence that it’s a risk factor, people have been operating under that assumption anyway.

“Hopefully our research contributes to raising the profile that high traditional masculinity is a relevant risk factor that we can organize prevention and treatment around.”
 

An important contribution

Mark S. Kaplan, DrPH, commenting on the findings in an interview, said the study makes an important contribution to suicide research.

“Any study that tries to link a living sample with death data, as they did here, is important,” said Dr. Kaplan, professor of social welfare at the Luskin School of Public Affairs of the University of California, Los Angeles.

“It’s also important because it begins to scratch the surface of more proximal or distal factors that are associated with suicide, and masculinity is one of those factors,” Dr. Kaplan added.

“In an incremental way, it begins to add to the puzzle of why men have a higher mortality rate than their female counterparts. Because when it comes to suicide, men and women really are apples and oranges.”

Dr. Kaplan believes HTM is one of several traits that may lead men to take their own lives.

“There are all sorts of other issues. For example, masculinity might be interacting with some of the harsh socioeconomic conditions that many men face. I think all of this points to the real need to understand why men die from suicide,” he said.

The Add Health study is funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development, with cooperative funding from 23 other federal agencies and foundations. No direct support was received from the grant for the current study. Dr. Coleman and Dr. Kaplan have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Early cognitive screening is key for schizophrenia spectrum disorder

Article Type
Changed
Mon, 02/24/2020 - 09:26

As many as 24% of individuals with schizophrenia spectrum disorder who underwent a comprehensive neurocognitive battery performed above the mean score of healthy controls in some neurocognitive domains, results from a novel study show.

“Based on these findings, we recommend that neurocognitive assessment should be performed as early as possible after illness onset,” researchers led by Lars Helldin, MD, PhD, of the department of psychiatry at NU Health-Care Hospital, Region Västra Götaland, Sweden, wrote in a study published in Schizophrenia Research: Cognition (2020 Jun doi: 10.1016/j.scog.2020.100172). “Early identification of cognitive risk factors for poor real-life functional outcome is necessary in order to alert the clinical and rehabilitation services about patients in need of extra care.”



For the study, 291 men and women with schizophrenia spectrum disorder (SSD) and 302 controls underwent assessment with a comprehensive battery of clinical, functional, and neurocognitive measures, including the Global Assessment of Functioning (GAF), the Positive and Negative Syndrome Scale (PANSS), the Specific Level of Functioning Scale (SLOF), the Rey Auditory Verbal Learning Test (RAVLT), and the Wisconsin Card Sorting Test (WCST). The researchers found that the neurocognitive function of the SSD patients was significantly lower than that of the healthy controls on all assessments, with very large effect sizes. “There was considerable diversity within each group, as subgroups of patients scored higher than the control mean and subgroups of controls scored lower than the patient mean, particularly on tests of working memory, verbal learning and memory, and executive function,” wrote Dr. Helldin and associates.

On the WCST, the cognitively intact patient group had a significantly lower PANSS negative symptom level (P less than .01), a lower PANSS general pathology level (P less than .05), and a lower PANSS total symptom level (P less than .01). On the Wechsler Adult Intelligence Scale (WAIS) Vocabulary test, the patient subgroup that scored higher than the controls had a significantly lower PANSS negative symptom level (P less than .05).

“Here, we have linked neurocognitive heterogeneity to functional outcome differences, and suggest that personalized treatment with emphasis on practical daily skills may be of great significance especially for those with large baseline cognitive deficits,” the researchers concluded. “Such efforts are imperative not only in order to reduce personal suffering and increase quality of life for the patients, but also to reduce the enormous society level economic costs of functional deficits.”

The study was funded by the Regional Health Authority, VG Region, Sweden. The authors reported having no financial disclosures.



FROM SCHIZOPHRENIA RESEARCH: COGNITION


‘Momentous’ USMLE change: New pass/fail format stuns medicine

Article Type
Changed
Mon, 03/22/2021 - 14:08

News that the United States Medical Licensing Examination (USMLE) program will change its Step 1 scoring from a 3-digit number to pass/fail starting Jan. 1, 2022, has set off a flurry of shocked responses from students and physicians.

J. Bryan Carmody, MD, MPH, an assistant professor at Eastern Virginia Medical School in Norfolk, said in an interview that he was “stunned” when he heard the news on Wednesday, adding that the switch presents “the single biggest opportunity for medical school education reform since the Flexner Report,” which in 1910 established standards for modern medical education.

 

Numbers will continue for some tests

The USMLE cosponsors – the Federation of State Medical Boards (FSMB) and the National Board of Medical Examiners (NBME) – said that the Step 2 Clinical Knowledge (CK) exam and Step 3 will continue to be scored numerically. Step 2 Clinical Skills (CS) will continue its pass/fail system.

The change was made after Step 1 had been roundly criticized as playing too big a role in the process of becoming a physician and for causing students to study for the test instead of engaging fully in their medical education.

Ramie Fathy, a third-year medical student at the University of Pennsylvania, Philadelphia, currently studying for Step 1, said in an interview that it would have been nice personally to have the pass/fail choice, but he predicts both good and unintended consequences in the change.

The positive news, Mr. Fathy said, is that less emphasis will be put on the Step 1 test, which includes memorizing basic science details that may or may not be relevant depending on later specialty choice.

“It’s not necessarily measuring what the test makers intended, which was whether or not a student can understand and apply basic science concepts to the practice of medicine,” he said.

“The current system encourages students to get as high a score as possible, which – after a certain point – translates to memorizing many little details that become increasingly less practically relevant,” Mr. Fathy said.

 

Pressure may move elsewhere?

However, Mr. Fathy worries that, without a scoring system to help decide who stands out in Step 1, residency program directors will depend more on the reputation of candidates’ medical school and the clout of the person writing a letter of recommendation – factors that are often influenced by family resources and social standing. That could wedge a further economic divide into the path to becoming a physician.

Mr. Fathy said he and fellow students are watching for information on what the passing bar will be and what happens with the Step 2 Clinical Knowledge exam. USMLE has promised more information as soon as it is available.

“The question is whether that test will replace Step 1 as the standardized metric of student competency,” Mr. Fathy said, which would put more pressure on students further down the medical path.

 

Will Step 2 anxiety increase?

Dr. Carmody agreed that there is the danger that students now will spend their time studying for Step 2 CK at the expense of other parts of their education.

Meaningful reform will depend on the pass/fail move being coupled with other reforms, most importantly application caps, said Dr. Carmody, who teaches preclinical medical students and works with the residency program.

He has been blogging about Step 1 pass/fail for the past year.

Currently, students can apply to as many residency programs as they can pay for, and Dr. Carmody said the number of applications per student has been rising over the past decade.

“That puts program directors under an impossible burden,” he said. “With our Step 1-based system, there’s significant inequality in the number of interviews people get. Programs end up overinviting the same group of people who look good on paper.”

People outside that group respond by sending more applications than they need to just to get a few interviews, Dr. Carmody added.

With caps, students would have an incentive to apply to only those programs in which they had a sincere interest, he said. Program directors also would then be better able to evaluate each application.

Switching Step 1 to pass/fail may have some effect on medical school burnout, Dr. Carmody said.

“It’s one thing to work hard when you’re on call and your patients depend on it,” he said. “But I would have a hard time staying up late every night studying something that I know in my heart is not going to help my patients, but I have to do it because I have to do better than the person who’s studying in the apartment next to me.”

 

Test has strayed from original purpose

Joseph Safdieh, MD, an assistant dean for clinical curriculum and director of the medical student neurology clerkship at Weill Cornell Medicine, New York, sees the move as positive overall.

“We should not be using any single metric to define or describe our students’ overall profile,” he said in an interview.

“This has been a very significant anxiety point for our medical students for quite a number of years,” Dr. Safdieh said. “They were frustrated that their entire 4 years of medical school seemingly came down to one number.”

The test was created originally as one of three parts of licensure, he pointed out.

“Over the past 10 or 15 years, the exam has morphed to become a litmus test for very specific residency programs,” he said.

However, Dr. Safdieh has concerns that Step 2 will cultivate the same anxiety and may get too big a spotlight without the Step 1 metric, “although one could argue that test does more accurately reflect clinical material,” he said.

He also worries that students who have selected a specialty by the time they take Step 2 may find late in the game that they are less competitive in their field than they thought they were and may have to make a last-minute switch.

Dr. Safdieh said he thinks Step 2 will be next to go the pass/fail route. In reading between the lines of the announcement, he believes the test cosponsors didn’t make both pass/fail at once because it would have been “a nuclear bomb to the system.”

He credited the cosponsors with making what he called a “bold and momentous decision to initiate radical change in the overall transition between undergraduate and graduate medical education.”

Dr. Safdieh added that few in medicine were expecting Wednesday’s announcement.

“I think many of us were expecting them to go to quartile grading, not to go this far,” he said.

Dr. Safdieh suggested that offshore schools, such as those in the Caribbean, are among those that may see downstream effects from the pass/fail move. “Those schools rely on Step 1 to demonstrate that their students are meeting the rigor,” he said. But he hopes that this will lead to more holistic review.

“We’re hoping that this will force change in the system so that residency directors will look at more than just test-taking ability. They’ll look at publications and scholarship, community service and advocacy and performance in medical school,” Dr. Safdieh said.

Alison J. Whelan, MD, chief medical education officer of the Association of American Medical Colleges (AAMC), said in a statement, “The transition from medical school to residency training is a matter of great concern throughout academic medicine.

“The decision by the NBME and FSMB to change USMLE Step 1 score reporting to pass/fail was very carefully considered to balance student learning and student well-being,” she said. “The medical education community must now work together to identify and implement additional changes to improve the overall UME-GME [undergraduate and graduate medical education] transition system for all stakeholders and the AAMC is committed to helping lead this work.”

Dr. Fathy, Dr. Carmody, and Dr. Safdieh have disclosed no relevant financial relationships.
 

This article first appeared on Medscape.com.




Brain imaging offers new insight into persistent antisocial behavior

Article Type
Changed
Mon, 03/22/2021 - 14:08

Individuals who exhibit antisocial behavior over a lifetime have a thinner cortex and smaller surface area in key brain regions relative to their counterparts who do not engage in antisocial behavior, new research shows.

However, investigators found no widespread structural brain abnormalities in the group of individuals who exhibited antisocial behavior only during adolescence.

These brain differences seem to be “quite specific and unique” to individuals who exhibit persistent antisocial behavior over their life, lead researcher Christina O. Carlisi, PhD, of University College London, said during a press briefing.

“Critically, the findings don’t directly link brain structure abnormalities to antisocial behavior,” she said. Nor do they mean that anyone with a smaller brain or brain area is destined to be antisocial or to commit a crime.

“Our findings support the idea that, for the small proportion of individuals with life-course–persistent antisocial behavior, there may be differences in their brain structure that make it difficult for them to develop social skills that prevent them from engaging in antisocial behavior,” Dr. Carlisi said in a news release. “These people could benefit from more support throughout their lives.”

The study, the investigators noted, provides the first robust evidence to suggest that underlying neuropsychological differences are primarily associated with life-course–persistent antisocial behavior. It was published online Feb. 17 in the Lancet Psychiatry (doi: 10.1016/S2215-0366[20]30002-X).

Support for second chances

Speaking at the press briefing, coauthor Terrie E. Moffitt, PhD, of Duke University, Durham, N.C., said it’s well known that most young criminals are between the ages of 16 and 25.

Breaking the law is not at all rare in this age group, but not all of these young offenders are alike, she noted. Only a few become persistent repeat offenders.

“They start as a young child with aggressive conduct problems and eventually sink into a long-term lifestyle of repetitive serious crime that lasts well into adulthood, but this is a small group,” Dr. Moffitt explained. “In contrast, the larger majority of offenders will have only a short-term brush with lawbreaking and then grow up to become law-abiding members of society.”

The current study suggests that what makes short-term offenders behave differently from long-term offenders might involve some vulnerability at the level of the structure of the brain, Dr. Moffitt said.

The findings stem from 672 individuals in the Dunedin Multidisciplinary Health and Development Study, a population-representative, longitudinal birth cohort that assesses health and behavior.

On the basis of reports from parents, care givers, and teachers, as well as self-reports of conduct problems in persons aged 7-26 years, 80 participants (12%) had “life-course–persistent” antisocial behavior, 151 (23%) had adolescent-only antisocial behavior, and 441 (66%) had “low” antisocial behavior (control group, whose members never had a pervasive or persistent pattern of antisocial behavior).

Brain MRI obtained at age 45 years showed that, among individuals with persistent antisocial behavior, mean surface area was smaller (95% confidence interval, –0.24 to –0.11; P less than .0001) and mean cortical thickness was lower (95% CI, –0.19 to –0.02; P = .020), compared with their peers in the control group.

For those in the life-course–persistent group, surface area was reduced in 282 of 360 anatomically defined brain parcels, and cortex was thinner in 11 of 360 parcels encompassing frontal and temporal regions (which were associated with executive function, emotion regulation, and motivation), compared with the control group.

Widespread differences in brain surface morphometry were not found in those who exhibited antisocial behavior during adolescence only. Such behavior was likely the result of their having to navigate through socially tough years.

“These findings underscore prior research that really highlights that there are different types of young offenders. They are not all the same; they should not all be treated the same,” coauthor Essi Viding, PhD, who also is affiliated with University College London, told reporters.

The findings support current strategies aimed at giving young offenders “a second chance” as opposed to enforcing harsher policies that prioritize incarceration for all young offenders, Dr. Viding added.

 

 

Important contribution

The authors of an accompanying commentary noted that, despite “remarkable progress in the past 3 decades, the etiology of antisocial behavior remains elusive” (Lancet Psychiatry. 2020 Feb 17. doi: 10.1016/S2215-0366[20]30035-3).

This study makes “an important contribution by identifying structural brain correlates of antisocial behavior that could be used to differentiate among individuals with life-course-persistent antisocial behavior, those with adolescence-limited antisocial behavior, and non-antisocial controls,” write Inti A. Brazil, PhD, of the Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, the Netherlands, and Macià Buades-Rotger, PhD, of the Institute of Psychology II, University of Lübeck, Germany.

They noted that the findings might help to move the field closer to achieving the long-standing goal of incorporating neural data into assessment protocols for antisocial behavior.

The discovery of “meaningful morphologic differences between individuals with life-course–persistent and adolescence-limited antisocial behavior offers an important advance in the use of brain metrics for differentiating among individuals with antisocial dispositions.

“Importantly, however, it remains to be determined whether and how measuring the brain can be used to bridge the different taxometric views and theories on the etiology of antisocial behavior,” Dr. Brazil and Dr. Buades-Rotger concluded.

The study was funded by the U.S. National Institute on Aging; the Health Research Council of New Zealand; the New Zealand Ministry of Business, Innovation and Employment; the U.K. Medical Research Council; the Avielle Foundation; and the Wellcome Trust. The study authors and the authors of the commentary disclosed no relevant financial relationships.
 

A version of this article first appeared on Medscape.com.



Individuals who exhibit antisocial behavior over a lifetime have a thinner cortex and smaller surface area in key brain regions relative to their counterparts who do not engage in antisocial behavior, new research shows.

However, investigators found no widespread structural brain abnormalities in the group of individuals who exhibited antisocial behavior only during adolescence.

These brain differences seem to be “quite specific and unique” to individuals who exhibit persistent antisocial behavior over their life, lead researcher Christina O. Carlisi, PhD, of University College London, said during a press briefing.

“Critically, the findings don’t directly link brain structure abnormalities to antisocial behavior,” she said. Nor do they mean that anyone with a smaller brain or brain area is destined to be antisocial or to commit a crime.

“Our findings support the idea that, for the small proportion of individuals with life-course–persistent antisocial behavior, there may be differences in their brain structure that make it difficult for them to develop social skills that prevent them from engaging in antisocial behavior,” Dr. Carlisi said in a news release. “These people could benefit from more support throughout their lives.”

The study, the investigators noted, provides the first robust evidence to suggest that underlying neuropsychological differences are primarily associated with life-course–persistent antisocial behavior. It was published online Feb. 17 in the Lancet Psychiatry (doi: 10.1016/S2215-0366[20]30002-X).

Support for second chances

Speaking at the press briefing, coauthor Terrie E. Moffitt, PhD, of Duke University, Durham, N.C., said it’s well known that most young criminals are between the ages of 16 and 25.

Breaking the law is not at all rare in this age group, but not all of these young offenders are alike, she noted. Only a few become persistent repeat offenders.

“They start as a young child with aggressive conduct problems and eventually sink into a long-term lifestyle of repetitive serious crime that lasts well into adulthood, but this is a small group,” Dr. Moffitt explained. “In contrast, the larger majority of offenders will have only a short-term brush with lawbreaking and then grow up to become law-abiding members of society.”

The current study suggests that what makes short-term offenders behave differently from long-term offenders might involve some vulnerability at the level of the structure of the brain, Dr. Moffitt said.

The findings stem from 672 individuals in the Dunedin Multidisciplinary Health and Development Study, a population-representative, longitudinal birth cohort that assesses health and behavior.

On the basis of reports from parents, caregivers, and teachers, as well as self-reports of conduct problems in persons aged 7-26 years, 80 participants (12%) had “life-course–persistent” antisocial behavior, 151 (23%) had adolescence-only antisocial behavior, and 441 (66%) had “low” antisocial behavior (the control group, whose members never had a pervasive or persistent pattern of antisocial behavior).

Brain MRI obtained at age 45 years showed that individuals with life-course–persistent antisocial behavior had a smaller mean surface area (95% confidence interval, –0.24 to –0.11; P less than .0001) and a lower mean cortical thickness (95% CI, –0.19 to –0.02; P = .020) than their peers in the control group.

For those in the life-course–persistent group, surface area was reduced in 282 of 360 anatomically defined brain parcels, and cortex was thinner in 11 of 360 parcels encompassing frontal and temporal regions (which were associated with executive function, emotion regulation, and motivation), compared with the control group.

Widespread differences in brain surface morphometry were not found in those who exhibited antisocial behavior during adolescence only. Such behavior was likely the result of having to navigate the socially difficult adolescent years.

“These findings underscore prior research that really highlights that there are different types of young offenders. They are not all the same; they should not all be treated the same,” coauthor Essi Viding, PhD, who also is affiliated with University College London, told reporters.

The findings support current strategies aimed at giving young offenders “a second chance” as opposed to enforcing harsher policies that prioritize incarceration for all young offenders, Dr. Viding added.

 

 

Important contribution

The authors of an accompanying commentary noted that, despite “remarkable progress in the past 3 decades, the etiology of antisocial behavior remains elusive” (Lancet Psychiatry. 2020 Feb 17. doi: 10.1016/S2215-0366[20]30035-3).

This study makes “an important contribution by identifying structural brain correlates of antisocial behavior that could be used to differentiate among individuals with life-course-persistent antisocial behavior, those with adolescence-limited antisocial behavior, and non-antisocial controls,” write Inti A. Brazil, PhD, of the Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, the Netherlands, and Macià Buades-Rotger, PhD, of the Institute of Psychology II, University of Lübeck, Germany.

They noted that the findings might help to move the field closer to achieving the long-standing goal of incorporating neural data into assessment protocols for antisocial behavior.

The discovery of “meaningful morphologic differences between individuals with life-course–persistent and adolescence-limited antisocial behavior offers an important advance in the use of brain metrics for differentiating among individuals with antisocial dispositions.

“Importantly, however, it remains to be determined whether and how measuring the brain can be used to bridge the different taxometric views and theories on the etiology of antisocial behavior,” Dr. Brazil and Dr. Buades-Rotger concluded.

The study was funded by the U.S. National Institute on Aging; the Health Research Council of New Zealand; the New Zealand Ministry of Business, Innovation and Employment; the U.K. Medical Research Council; the Avielle Foundation; and the Wellcome Trust. The study authors and the authors of the commentary disclosed no relevant financial relationships.
 

A version of this article first appeared on Medscape.com.


My inspiration

Article Type
Changed
Wed, 05/06/2020 - 12:50

Kobe Bryant knew me. Not personally, of course. I never received an autograph or shook his hand. But once in a while if I was up early enough, I’d run into Kobe at the gym in Newport Beach where he and I both worked out. As he did for all his fans at the gym, he’d make eye contact with me and nod hello. He was always focused on his workout – working with a trainer, never with headphones on. In person, he appeared enormous. Unlike most retired professional athletes, he still was in great shape. No doubt he could have suited up in purple and gold, and played against the Clippers that night if needed.

Featureflash Photo Agency
Kobe Bryant at the 90th Academy Awards at the Dolby Theatre, Hollywood, Calif., on March 4, 2018.

Being from New England, I never was a Laker fan. But at Kobe’s peak around 2000, I found him inspiring. I recall watching him play right around the time I was studying for my U.S. medical licensing exams. I thought, if Kobe could head to the gym after midnight and take 1,000 shots to prepare for a game, then I could set my alarm for 4 a.m. and take a few dozen more questions from my First Aid books. Head down, “Kryptonite” cranked on my iPod, I wasn’t going to let anyone in that test room outwork me. Neither did he. I put in the time and, like Kobe in the 2002 conference finals against Sacramento, I crushed it.*

When we moved to California, I followed Kobe and the Lakers until he retired. To be clear, I didn’t aspire to be like him, firstly because I’m slightly shorter than Michael Bloomberg, but also because, although accomplished, Kobe made some poor choices at times. Indeed, it seems he might have been kinder and more considerate when he was at the top. But in his retirement he looked to be toiling to make reparations, refocusing his prodigious energy and talent for the benefit of others rather than just for scoring 81 points. His Rolls Royce was there before mine at the gym, and I was there early. He was still getting up early, now preparing to be a great venture capitalist, podcaster, author, and father to his girls.

Dr. Jeffrey Benabio

Watching him carry kettlebells across the floor one morning, I wondered, do people like Kobe Bryant look to others for inspiration? Or are they born with an endless supply of it? For me, I seemed to push harder and faster when watching idols pass by. Whether it was Kobe or Clayton Christensen (author of “The Innovator’s Dilemma”), Joe Jorizzo, or Barack Obama, I found I could do just a bit more if I had them in mind.

On game days, Kobe spoke of arriving at the arena early, long before anyone else. He would use the silent, solo time to reflect on what he needed to do to perform that night. I tried this last week, arriving at our clinic early, before any patients or staff. I turned the lights on and took a few minutes to think about what we needed to accomplish that day. I previewed the patients on my schedule and searched UpToDate for the latest recommendations on a difficult case. I didn’t know Kobe, but I felt like I did.


When I received the text that Kobe Bryant had died, I was actually working on this column. So I decided to change the topic to write about people who inspire me, ironically inspired by him again. May he rest in peace.
 

Dr. Benabio is director of Healthcare Transformation and chief of dermatology at Kaiser Permanente San Diego. The opinions expressed in this column are his own and do not represent those of Kaiser Permanente. Dr. Benabio is @Dermdoc on Twitter. Write to him at [email protected].

*This article was updated 2/19/2020.


For OUD patients, ‘a lot of work to be done’

Article Type
Changed
Tue, 03/10/2020 - 07:05

Most Americans who need medication-assisted treatment not getting it

– For Karen J. Hartwell, MD, few things in her clinical work bring more reward than providing medication-assisted treatment (MAT) to patients with opioid use disorder.

Doug Brunk/MDedge News
Dr. Karen J. Hartwell

“Seeing people get into recovery on buprenorphine is as exciting as seeing your first person respond to clozapine, or to see a depression remit on your selection of an antidepressant,” she said at an annual psychopharmacology update held by the Nevada Psychiatric Association. “We know that medication-assisted treatment is underused and, sadly, relapse rates remain high.”

According to the Centers for Disease Control and Prevention, there were 70,237 drug-related overdose deaths in 2017 – 47,600 from prescription and illicit opioids. “This is being driven predominately by fentanyl and other high-potency synthetic opioids, followed by prescription opioids and heroin,” said Dr. Hartwell, an associate professor in the addiction sciences division in the department of psychiatry and behavioral sciences at the Medical University of South Carolina, Charleston.

There were an estimated 2 million Americans with an opioid use disorder (OUD) in 2018, she said, and more than 10 million misused prescription opioids. At the same time, opioid prescribing has dropped to its lowest level in 10 years, from a peak of 81.3 prescriptions per 100 persons in 2012 to 58.7 prescriptions per 100 persons in 2017 – a total of more than 191 million scripts. “There is a decline in the number of opioid prescriptions, but there is still a lot of diversion, and there are some prescription ‘hot spots’ in the Southeast,” Dr. Hartwell said. “Heroin is a very low cost, and we’re wrestling with the issue of fentanyl.”

To complicate matters, most Americans with opioid use disorder are not in treatment. “In many people, the disorder is never diagnosed, and even fewer engage in care,” she said. “There are challenges with treatment retention, and even fewer achieve remission. There’s a lot of work to be done. One of which is the availability of medication-assisted treatment.”

Dr. Hartwell said that she knows of physician colleagues who have obtained a waiver to prescribe buprenorphine but have yet to prescribe it. “Some people may prefer to avoid the dance [of buprenorphine prescribing],” she said. “I’m here to advise you to dance.” Clinicians can learn about MAT waiver training opportunities by visiting the website of the Providers Clinical Support System, a program funded by the Substance Abuse and Mental Health Services Administration (SAMHSA).

Another option is to join a telementoring session on the topic facilitated by Project ECHO, or Extension for Community Healthcare Outcomes, which is being used by the University of New Mexico, Albuquerque. The goal of this model is to break down the walls between specialty and primary care by linking experts at an academic “hub” with primary care doctors and nurses in nearby communities.

“Our Project ECHO at the Medical University of South Carolina is twice a month on Fridays,” Dr. Hartwell said. “The first half is a case. The second half is a didactic [session], and you get a free hour of CME.”

The most common drugs used for medication-assisted treatment of opioid use disorder are buprenorphine (a partial agonist), naltrexone (an antagonist), and methadone (a full agonist). Treatment retention generally is better with methadone than with buprenorphine or naltrexone. The recommended treatment duration is 6-12 months, yet studies show that many patients stay on treatment for only 30-60 days.



“You want to keep patients on treatment as long as they benefit from the medication,” Dr. Hartwell said. One large study of Medicaid claims data found that the risks of acute care service use and overdose were high following buprenorphine discontinuation, regardless of treatment duration. Outcomes were significantly better when treatment lasted beyond 15 months, although rates of the primary adverse outcomes remained high (Am J Psychiatry. 2020 Feb 1;177[2]:117-24). About 5% of patients across all cohorts experienced one or more medically treated overdoses.

“One thing I don’t want is for people to drop out of treatment and not come back to see me,” Dr. Hartwell said. “This is a time for us to use our shared decision-making skills. I like to use the Tapering Readiness Inventory, a list of 16 questions. It asks such things as ‘Are you able to cope with difficult situations without using?’ and ‘Do you have all of the [drug] paraphernalia out of the house?’ We then have a discussion. If the patient decides to go ahead and do a taper, I always leave the door open. So, as that taper persists and someone says, ‘I’m starting to think about using, Doctor,’ I’ll put them back on [buprenorphine]. Or, if they come off the drug and they find themselves at risk of relapsing, they come back in and see me.”

There’s also some evidence that contingency management may be helpful, both in terms of opioid-negative urine screens and retention in treatment. Meanwhile, extended-release formulations of buprenorphine are emerging.

In 2017, the Food and Drug Administration approved Sublocade, the first once-monthly injectable buprenorphine product for the treatment of moderate to severe OUD in adult patients who have initiated treatment with a transmucosal buprenorphine-containing product. “The recommendations are that you have about a 7-day lead-in of sublingual buprenorphine, and then 2 months of a 300-mg [subcutaneous] injection,” Dr. Hartwell said. “This is followed by either 100-mg injections monthly or 300-mg maintenance in select cases. There is some pain at the injection site. Some clinicians are getting around this by using a little bit of lidocaine prior to giving the injection.”

Another product, Brixadi, is an extended-release weekly (8 mg, 16 mg, 24 mg, 32 mg) and monthly (64 mg, 96 mg, 128 mg) buprenorphine injection used for the treatment of moderate to severe OUD. It is expected to be available in December 2020.

In 2016, the FDA approved Probuphine, the first buprenorphine implant for the maintenance treatment of opioid dependence. Probuphine is designed to provide a constant, low-level dose of buprenorphine for 6 months in patients who are already stable on low to moderate doses of other forms of buprenorphine, as part of a complete treatment program. “The 6-month duration kind of takes the issue of adherence off the table,” Dr. Hartwell said. “The caveat with this is that you have to be stable on 8 mg of buprenorphine per day or less. The majority of my patients require much higher doses.”

Dr. Hartwell reported having no relevant disclosures.


REPORTING FROM NPA 2020

Private equity firms acquiring more physician group practices

Article Type
Changed
Mon, 06/08/2020 - 16:30

Private equity firms are increasingly acquiring physician practices across a range of specialties, a recent analysis shows.

Dr. Jane Zhu

Lead author Jane M. Zhu, MD, of Oregon Health & Science University, Portland, and colleagues examined physician group practice acquisitions by private equity firms using the Irving Levin Associates Health Care M&A data set, which includes manually collected and verified transactional information on health care mergers and acquisitions. Investigators linked acquisitions to the SK&A data set, a commercial data set of verified physicians and practice-level characteristics of U.S. office-based practices.

Of about 18,000 unique group medical practices, private equity firms acquired 355 physician practices from 2013 to 2016, with annual acquisitions rising from 59 practices in 2013 to 136 in 2016, Dr. Zhu and colleagues reported on Feb. 18, 2020, in a research letter published in JAMA.

Acquired practices had a mean of four sites, 16 physicians in each practice, and 6 physicians affiliated with each site, the data found. Overall, 81% of these medical practices reported accepting new patients, 83% accepted Medicare, and 60% accepted Medicaid. The majority of acquired practices were in the South (44%).

Anesthesiology (19%) and multispecialty (19%) were the most commonly represented medical groups in the acquisitions, followed by emergency medicine (12%), family practice (11%), and dermatology (10%). In addition, from 2015 to 2016, the number of acquired cardiology, ophthalmology, radiology, and ob.gyn. practices increased. Within acquired practices, anesthesiologists represented the majority of all physicians, followed by emergency medicine specialists, family physicians, and dermatologists.

Dr. Zhu and colleagues cited a key limitation: Because the data are based on transactions that have been publicly announced, the acquisition of smaller practices might have been underestimated.

Still, the findings demonstrate that private equity acquisitions of physician medical groups are accelerating across multiple specialties, Dr. Zhu said in an interview.

“From our data, acquired medical groups seem to have relatively large footprints with multiple office sites and multiple physicians, which mirrors a typical investment strategy for these firms,” she said.

Dr. Zhu said that more research is needed about how these purchases affect practice patterns, delivery of care, and clinician behavior. Private equity firms expect greater than 20% annual returns, and such financial incentives may conflict with the need for longer-term investments in practice stability, physician recruitment, quality, and safety, according to the study.

“In theory, there may be greater efficiencies introduced from private equity investment – for example, through administrative and billing efficiencies, reorganizing practice structures, or strengthening technology supports,” Dr. Zhu said. “But because of private equity firms’ emphasis on return on investment, there may be unintended consequences of these purchases on practice stability and patient care. We don’t yet know what these effects will be, and we need robust, longitudinal data to investigate this question.”

Dr. Zhu and colleagues reported that they had no disclosures.

SOURCE: Zhu JM et al. JAMA. 2020 Feb 18;323(7):663-5.


FROM JAMA