Demystifying psychotherapy
Managing psychiatric illnesses is rapidly becoming routine practice for primary care pediatricians, whether screening for symptoms of anxiety and depression, starting medication, or providing psychoeducation to youth and parents. Pediatricians can provide strategies to address the impairments in sleep, energy, motivation, and appetite that can accompany these illnesses. Psychotherapy, a relationship based on understanding and providing support, should be a core element of treatment for emotional disorders, but there is a great deal of uncertainty around which therapies are supported by evidence. This month, we offer a primer on the evidence-based psychotherapies for youth, while also recognizing that research defining the effectiveness of psychotherapy is limited and complex.
Cognitive-behavioral psychotherapy (CBT)
Mention psychotherapy and most people think of a patient reclining on a couch free-associating about their childhood while a therapist sits behind them taking notes. This potent image stems from psychoanalytic psychotherapy, developed in the 19th century by Sigmund Freud, and was based on his theory that unconscious conflicts drove most of the puzzling behaviors and emotional distress associated with “neurosis.” Psychoanalysis became popular in 20th century America, even for use with children. Evidence is hard to develop since psychoanalytic therapy often lasts years, there are a limited number of patients, and the method is hard to standardize.
A focus on how to shape behaviors directly also emerged in the early 20th century (in the work of John Watson and Ivan Pavlov). Aaron Beck, MD, the father of CBT, observed in his psychoanalytic treatments that many patients appeared to be experiencing emotional distress around thoughts that were not unconscious. Instead, his patients were experiencing “automatic thoughts,” or rapid, often-distorted thoughts that have the force of truth in the thinker. These thoughts create emotional distress and behaviors that may reinforce the thoughts and emotional distress. For example, a depressed patient who is uncomfortable in social situations may think “nobody ever likes me.” This may cause them to appear uncomfortable or unfriendly in a new social situation and prevent them from making connections, perpetuating a cycle of isolation, insecurity, and loneliness. Identifying these automatic thoughts, and their connection to painful feelings and perpetuating behaviors, is at the core of CBT.
In CBT the therapist is much more active than in psychoanalysis. They engage patients in identifying thought distortions together, challenging them on the truth of these thoughts and recognizing the connection to emotional distress. They also identify maladaptive behaviors and focus on strategies to build new, more effective behavioral responses to thoughts, feelings, and situations. This is often done with gradual “exposures” to new behaviors, which are naturally reinforced by better outcomes or lowered distress. When performed with high fidelity, CBT is a very structured treatment that is closer to an emotionally supportive form of coaching and skill building. CBT is at the core of most evidence-based psychotherapies that have emerged in the past 60 years.
CBT is the first-line treatment for anxiety disorders in children, adolescents, and adults. A variant called “exposure and response prevention” is the first-line treatment for obsessive-compulsive disorder, and is predominantly behavioral. It is focused on preventing patients with anxiety disorders from engaging in the maladaptive behaviors that lower their anxiety in the short term but cause worsened anxiety and impairment over time (such as avoiding social situations when they are worried that others won’t like them).
CBT is also a first-line treatment for major depressive episodes in teenagers and adults, although those for whom the symptoms are severe often need medication to be able to fully participate in therapy. There are variants of CBT that have demonstrated efficacy in the treatment of posttraumatic stress disorder, bulimia, and even psychosis. It makes developmental sense that therapies with a problem-focused coaching approach might be more effective in children and adolescents than open-ended exploratory psychotherapies.
Traditional CBT was not very effective for patients with a variant of depression that is marked by stormy relationships, irritability, chronic suicidality, and impulsive attempts to regulate discomfort (including bingeing, purging, sexual acting-out, drug use, and self-injury or cutting), a symptom pattern called “borderline personality disorder.” These patients often ended up on multiple medications with only modest improvements in their function and well-being.
But in the 1990s, a research psychologist named Marsha Linehan developed a modified version of CBT to use with these patients called dialectical behavior therapy (DBT). The “dialectic” emphasizes the role of two things being true at once, in this case the need for acceptance and change. DBT helps patients develop distress tolerance and emotional regulation skills alongside adaptive social and communication skills. DBT has demonstrated efficacy in the treatment of these patients as well as in the treatment of other disorders marked by poor distress tolerance and self-regulation (such as substance use disorders, binge-eating disorder, and PTSD).
DBT was adapted for use in adolescents given the prevalence of these problems in this age group, and it is the first-line treatment for adolescents with these specific mood and behavioral symptoms. High-fidelity DBT has an individual, group, and family component that are all essential for the treatment to be effective.
Instruction about the principles of CBT and DBT is a part of graduate school in psychology, but not every postgraduate training program includes thorough training in their practice. Completion of this specialized training leads to certification. It is very important that families understand that anyone may call themselves a psychotherapist. Those therapists who have master’s degrees (MSW, MFT, PCC, and others) may not have had exposure to these evidence-based treatments in their shorter graduate programs. Even doctoral-level training programs often do not include complete training in the high-fidelity delivery of these therapies.
It is critical that you help families be educated consumers and ask therapists if they have training and certification in the recommended therapy. The Psychology Today website has a therapist referral resource that includes this information. Training programs can provide access to therapists who are learning these therapies; with skilled supervision, they can provide excellent treatment.
We should note that there are several other evidence-based therapies, including family-based treatment for anorexia nervosa, motivational interviewing for substance use disorders, and interpersonal psychotherapy for depression associated with high family conflict in adolescents.
There is good evidence that the quality of the alliance between therapist and patient is a critical predictor of whether a therapy will be effective. It is appropriate for your patient to look for a therapist whom they can trust and talk to, and to confirm that the therapist is trained in the recommended psychotherapy. Otherwise, your patient is spending valuable time and money on an enterprise that may not be effective. This can leave them and their parents feeling discouraged or even hopeless about the prospects for recovery, and it can promote an overreliance on medications. In addition to providing your patients with effective screening, medication treatment, and psychoeducation, you can enhance their ability to find an optimal therapist to relieve their suffering.
Dr. Swick is physician in chief at Ohana, Center for Child and Adolescent Behavioral Health, Community Hospital of the Monterey (Calif.) Peninsula. Dr. Jellinek is professor emeritus of psychiatry and pediatrics, Harvard Medical School, Boston. Email them at [email protected].
The doctor circuit
A long time ago, as a fourth-year medical student, I did a neurology rotation at a large academic center.
One of the attendings was talking to me about reading, and how, once learned, it became innate: a function that, like breathing, couldn’t be turned off.
He was right, as is obvious to anyone. Driving down the road, walking past a newsstand, even opening a fridge covered with magnets from various other medical businesses ... it’s impossible NOT to process the letters into words and words into meanings, even if just for a second. Advertisers and headline-writers figured this out long ago. The key is to make those few words something that grabs our attention and interest, so we’ll either want to read more or retain it.
So too is being a doctor. Once that switch is on, you can’t flip it off.
Recently Queen Elizabeth II died. In reading the news stories, without intending to, I found my mind trying to pick out details about her medical condition, formulate a differential ... after all these years of being in medicine it’s second nature to do that.
Of course, it’s none of my business, and I greatly respect personal privacy. But the point is there. At some point, like reading, we can’t turn off the doctor circuit (for lack of a better term). We do it all the time, analyzing gait patterns and arm swings as people go by. Noticing facial asymmetries, tremors, speech patterns. It may be turned down a few notches from when we’re in the office or hospital, but it’s still there.
It becomes second nature, a part of who we are.
It’s not just doctors. Architects casually notice building details that no one else would. Software engineers off-handedly see program features (good and bad) that the rest of us wouldn’t. Teachers and editors pick up on grammatical errors even when they’re not trying to.
None of these (aside from basic observation) are things that brains originally started out to do. But through training and experience we’ve adapted them to do this. We never stop observing, collecting data, and processing it, in ways peculiar to our backgrounds.
Which, if you think about it, is pretty remarkable.
Dr. Block has a solo neurology practice in Scottsdale, Ariz.
When do we stop using BMI to diagnose obesity?
“BMI is trash. Full stop.” This controversial tweet received 26,500 likes and almost 3,000 retweets. The 400 comments from medical and non–health care personnel ranged from agreeable to contrary to offensive.
As a Black woman who is an obesity expert living with the impact of obesity in my own life, I know the emotion that a BMI conversation can evoke. Before emotions hijack the conversation, let’s discuss BMI’s past, present, and future.
BMI: From observational measurement to clinical use
Imagine walking into your favorite clothing store where an eager clerk greets you with a shirt to try on. The fit is off, but the clerk insists that the shirt must fit because everyone who’s your height should be able to wear it. This scenario seems ridiculous. But this is how we’ve come to use the BMI. Instead of thinking that people of the same height may be the same size, we declare that they must be the same size.
The idea behind the BMI was conceived in 1832 by Belgian anthropologist and mathematician Adolphe Quetelet, but he didn’t intend for it to be a health measure. Instead, it was simply an observation of how people’s weight changed in proportion to height over their lifetime.
Fast-forward to the 20th century, when insurance companies began using weight as an indicator of health status. Weights were recorded in a “Life Table.” Individual health status was determined on the basis of arbitrary cut-offs for weight on the Life Tables. Furthermore, White men set the “normal” weight standards because they were the primary insurance holders.
In 1972, Ancel Keys, a physiologist and leading expert in body composition at the time, cried foul on this practice and sought to standardize the use of weight as a health indicator. Keys used Quetelet’s calculation — weight divided by the square of height — and termed it the Body Mass Index.
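The arithmetic behind Quetelet’s index is simple: weight in kilograms divided by the square of height in meters. A minimal sketch (the function name and the example values are illustrative choices, not from the article):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Quetelet's index: weight (kg) divided by height (m) squared."""
    if height_m <= 0:
        raise ValueError("height must be positive")
    return weight_kg / height_m ** 2

# Example: a 70 kg person who is 1.75 m tall
print(round(bmi(70, 1.75), 1))  # 22.9
```

Note that nothing in the formula accounts for body composition, age, sex, or ethnicity — which is precisely the limitation the article goes on to discuss.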
By 1985, the U.S. National Institutes of Health and the World Health Organization adopted the BMI. By the 21st century, BMI had become widely used in clinical settings. For example, the Centers for Medicare & Medicaid Services adopted BMI as a quality-of-care measure, placing even more pressure on clinicians to use BMI as a health screening tool.
BMI as a tool to diagnose obesity
We can’t discuss BMI without discussing the disease of obesity. BMI is the most widely used tool to diagnose obesity. In the United States, one-third of Americans meet the criteria for obesity. Another one-third are at risk for obesity.
Compared with BMI’s relatively quick acceptance into clinical practice, however, obesity was only recently recognized as a disease.
Historically, obesity has been viewed as a lifestyle choice, a view fueled by misinformation and multiple forms of bias. The historical bias and discrimination associated with BMI have led some public health officials and scholars to dismiss the use of BMI or to fail to recognize obesity as a disease.
This is a dangerous conclusion, because it comes to the detriment of the very people disproportionately impacted by obesity-related health disparities.
Furthermore, weight bias continues to prevent people living with obesity from receiving insurance coverage for life-enhancing obesity medications and interventions.
Is it time to phase out BMI?
The BMI is intertwined with many forms of bias: age, gender, racial, ethnic, and even weight. Therefore, it is time to phase out BMI. However, phasing out BMI is complex and will take time, given that:
- Obesity is still a relatively “young” disease. 2023 marks the 10th anniversary of obesity’s recognition as a disease by the American Medical Association. Currently, BMI is the most widely used tool to diagnose obesity. Tools such as waist circumference, body composition, and metabolic health assessment will need to replace the BMI. Shifting from BMI emphasizes that obesity is more than a number on the scale. Obesity, as defined by the Obesity Medicine Association, is indeed a “chronic, relapsing, multi-factorial, neurobehavioral disease, wherein an increase in body fat promotes adipose tissue dysfunction and abnormal fat mass physical forces, resulting in adverse metabolic, biomechanical, and psychosocial health consequences.”
- Much of our health research is tied to BMI. There have been some shifts in looking at non–weight-related health indicators. However, we need more robust studies evaluating other health indicators beyond weight and BMI. The availability of this data will help eliminate the need for BMI and promote individualized health assessment.
- Current treatment guidelines for obesity medications are based on BMI. (Note: Medications to treat obesity are called “anti-obesity” medications or AOMs. However, given the stigma associated with obesity, I prefer not to use the term “anti-obesity.”) Presently this interferes with long-term obesity treatment. Once BMI is “normal,” many patients lose insurance coverage for their obesity medication, despite needing long-term metabolic support to overcome the compensatory mechanism of weight regain. Obesity is a chronic disease that exists independent of weight status. Therefore, using non-BMI measures will help ensure appropriate lifetime support for obesity.
The preceding are barriers, not impossibilities. In the interim, if BMI is still used in any capacity, the BMI reference chart should be an adjusted BMI chart based on age, race, ethnicity, biological sex, and obesity-related conditions. Furthermore, BMI isn’t the sole determining factor of health status.
Instead, an “abnormal” BMI should initiate conversation and further testing, if needed, to determine an individual’s health. For example, compare two people of the same height with different BMIs and lifestyles. Current studies suggest that a person flagged as having a high adjusted BMI but who practices a healthy lifestyle and has no metabolic disease is at lower risk than a person with a “normal” BMI but a high waist circumference and an unhealthy lifestyle.
Regardless of your personal feelings, the facts are clear. Technology empowers us with better tools than BMI to determine health status. Therefore, it’s not a matter of if we will stop using BMI but when.
Sylvia Gonsahn-Bollie, MD, DipABOM, is an integrative obesity specialist who specializes in individualized solutions for emotional and biological overeating. Connect with her at www.embraceyouweightloss.com or on Instagram @embraceyoumd. Her bestselling book, “Embrace You: Your Guide to Transforming Weight Loss Misconceptions Into Lifelong Wellness,” is Healthline.com’s Best Overall Weight Loss Book 2022 and one of Livestrong.com’s picks for the 8 Best Weight-Loss Books to Read in 2022.
A version of this article first appeared on Medscape.com.
Barriers to System Quality Improvement in Health Care
Corresponding author: Ebrahim Barkoudah, MD, MPH, Department of Medicine, Brigham and Women’s Hospital, Boston, MA; [email protected]
Process improvement in any industry sector aims to increase the efficiency of resource utilization and delivery methods (cost) and the quality of the product (outcomes), with the goal of ultimately achieving continuous development.1 In the health care industry, variation in processes and outcomes along with inefficiency in resource use that result in changes in value (the product of outcomes/costs) are the general targets of quality improvement (QI) efforts employing various implementation methodologies.2 When the ultimate aim is to serve the patient (customer), best clinical practice includes both maintaining high quality (individual care delivery) and controlling costs (efficient care system delivery), leading to optimal delivery (value-based care). High-quality individual care and efficient care delivery are not competing concepts, but when working to improve both health care outcomes and cost, traditional and nontraditional barriers to system QI often arise.3
The possible scenarios after a QI intervention include backsliding (regression to the mean over time), steady-state (minimal fixed improvement that could sustain), and continuous improvement (tangible enhancement after completing the intervention with legacy effect).4 The scalability of results can be considered during the process measurement and the intervention design phases of all QI projects; however, the complex nature of barriers in the health care environment during each level of implementation should be accounted for to prevent failure in the scalability phase.5
The barriers to optimal QI outcomes leading to continuous improvement are multifactorial and are related to intrinsic and extrinsic factors.6 These factors span 3 fundamental levels: (1) individual-level inertia/beliefs, prior personal knowledge, and team-related factors7,8; (2) intervention-related and process-specific barriers and clinical practice obstacles; and (3) organizational-level challenges and macro-level and population-level barriers (Figure). The obstacles faced during the implementation phase will likely involve 2 of these levels simultaneously, which can add complexity and hinder or prevent the implementation of a tangible, successful QI process, eventually leading to backsliding or minimal fixed improvement rather than continuous improvement. Furthermore, a patient-centered approach to QI adds further complexity in design and execution, given the importance of reaching sustainable, meaningful improvement by incorporating patient preferences, caregiver engagement, and shared decision-making.9
Overcoming these multidomain barriers and reaching resilience and sustainability requires thoughtful planning and execution through a multifaceted approach.10 A meaningful start could include addressing clinical inertia at the individual and team levels by promoting open innovation and welcoming outside institutional collaborations and ideas through networks.11 At the individual level, encouraging participation and motivating health care workers to engage in QI through a multidisciplinary approach will foster collaborative harmony. Concurrently, the organization should support QI capability and scalability by removing competing priorities and establishing effective leadership that ensures resource allocation, communicates clear value-based principles, and fosters an environment of psychological safety.
A continuous improvement state is the optimal QI target, a target that can be attained by removing obstacles and paving a clear pathway to implementation. Focusing on the 3 levels of barriers will position the organization for meaningful and successful QI phases to achieve continuous improvement.
1. Adesola S, Baines T. Developing and evaluating a methodology for business process improvement. Business Process Manage J. 2005;11(1):37-46. doi:10.1108/14637150510578719
2. Gershon M. Choosing which process improvement methodology to implement. J Appl Business & Economics. 2010;10(5):61-69.
3. Porter ME, Teisberg EO. Redefining Health Care: Creating Value-Based Competition on Results. Harvard Business Press; 2006.
4. Holweg M, Davies J, De Meyer A, Lawson B, Schmenner RW. Process Theory: The Principles of Operations Management. Oxford University Press; 2018.
5. Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Q. 1998;76(4):593-624. doi:10.1111/1468-0009.00107
6. Solomons NM, Spross JA. Evidence‐based practice barriers and facilitators from a continuous quality improvement perspective: an integrative review. J Nurs Manage. 2011;19(1):109-120. doi:10.1111/j.1365-2834.2010.01144.x
7. Phillips LS, Branch WT, Cook CB, et al. Clinical inertia. Ann Intern Med. 2001;135(9):825-834. doi:10.7326/0003-4819-135-9-200111060-00012
8. Stevenson K, Baker R, Farooqi A, Sorrie R, Khunti K. Features of primary health care teams associated with successful quality improvement of diabetes care: a qualitative study. Fam Pract. 2001;18(1):21-26. doi:10.1093/fampra/18.1.21
9. What is patient-centered care? NEJM Catalyst. January 1, 2017. Accessed August 31, 2022. https://catalyst.nejm.org/doi/full/10.1056/CAT.17.0559
10. Kilbourne AM, Beck K, Spaeth‐Rublee B, et al. Measuring and improving the quality of mental health care: a global perspective. World Psychiatry. 2018;17(1):30-38. doi:10.1002/wps.20482
11. Huang HC, Lai MC, Lin LH, Chen CT. Overcoming organizational inertia to strengthen business model innovation: An open innovation perspective. J Organizational Change Manage. 2013;26(6):977-1002. doi:10.1108/JOCM-04-2012-0047
Corresponding author: Ebrahim Barkoudah, MD, MPH, Department of Medicine, Brigham and Women’s Hospital, Boston, MA; [email protected]
Process improvement in any industry sector aims to increase the efficiency of resource utilization and delivery methods (cost) and the quality of the product (outcomes), with the goal of ultimately achieving continuous development.1 In the health care industry, variation in processes and outcomes along with inefficiency in resource use that result in changes in value (the product of outcomes/costs) are the general targets of quality improvement (QI) efforts employing various implementation methodologies.2 When the ultimate aim is to serve the patient (customer), best clinical practice includes both maintaining high quality (individual care delivery) and controlling costs (efficient care system delivery), leading to optimal delivery (value-based care). High-quality individual care and efficient care delivery are not competing concepts, but when working to improve both health care outcomes and cost, traditional and nontraditional barriers to system QI often arise.3
The possible scenarios after a QI intervention include backsliding (regression to the mean over time), steady-state (minimal fixed improvement that could sustain), and continuous improvement (tangible enhancement after completing the intervention with legacy effect).4 The scalability of results can be considered during the process measurement and the intervention design phases of all QI projects; however, the complex nature of barriers in the health care environment during each level of implementation should be accounted for to prevent failure in the scalability phase.5
The barriers to optimal QI outcomes leading to continuous improvement are multifactorial and are related to intrinsic and extrinsic factors.6 These factors include 3 fundamental levels: (1) individual level inertia/beliefs, prior personal knowledge, and team-related factors7,8; (2) intervention-related and process-specific barriers and clinical practice obstacles; and (3) organizational level challenges and macro-level and population-level barriers (Figure). The obstacles faced during the implementation phase will likely include 2 of these levels simultaneously, which could add complexity and hinder or prevent the implementation of a tangible successful QI process and eventually lead to backsliding or minimal fixed improvement rather than continuous improvement. Furthermore, a patient-centered approach to QI would contribute to further complexity in design and execution, given the importance of reaching sustainable, meaningful improvement by adding elements of patient’s preferences, caregiver engagement, and the shared decision-making processes.9
Overcoming these multidomain barriers and reaching resilience and sustainability requires thoughtful planning and execution through a multifaceted approach.10 A meaningful start could include addressing the clinical inertia for the individual and the team by promoting open innovation and allowing outside institutional collaborations and ideas through networks.11 On the individual level, encouraging participation and motivating health care workers in QI to reach a multidisciplinary operation approach will lead to harmony in collaboration. Concurrently, the organization should support the QI capability and scalability by removing competing priorities and establishing effective leadership that ensures resource allocation, communicates clear value-based principles, and engenders a psychological safety environment.
A continuous improvement state is the optimal QI target, a target that can be attained by removing obstacles and paving a clear pathway to implementation. Focusing on the 3 levels of barriers will position the organization for meaningful and successful QI phases to achieve continuous improvement.
Corresponding author: Ebrahim Barkoudah, MD, MPH, Department of Medicine, Brigham and Women’s Hospital, Boston, MA; [email protected]
Process improvement in any industry sector aims to increase the efficiency of resource utilization and delivery methods (cost) and the quality of the product (outcomes), with the goal of ultimately achieving continuous development.1 In the health care industry, variation in processes and outcomes along with inefficiency in resource use that result in changes in value (the product of outcomes/costs) are the general targets of quality improvement (QI) efforts employing various implementation methodologies.2 When the ultimate aim is to serve the patient (customer), best clinical practice includes both maintaining high quality (individual care delivery) and controlling costs (efficient care system delivery), leading to optimal delivery (value-based care). High-quality individual care and efficient care delivery are not competing concepts, but when working to improve both health care outcomes and cost, traditional and nontraditional barriers to system QI often arise.3
The possible scenarios after a QI intervention include backsliding (regression to the mean over time), steady-state (minimal fixed improvement that could sustain), and continuous improvement (tangible enhancement after completing the intervention with legacy effect).4 The scalability of results can be considered during the process measurement and the intervention design phases of all QI projects; however, the complex nature of barriers in the health care environment during each level of implementation should be accounted for to prevent failure in the scalability phase.5
The barriers to optimal QI outcomes leading to continuous improvement are multifactorial and are related to intrinsic and extrinsic factors.6 These factors include 3 fundamental levels: (1) individual level inertia/beliefs, prior personal knowledge, and team-related factors7,8; (2) intervention-related and process-specific barriers and clinical practice obstacles; and (3) organizational level challenges and macro-level and population-level barriers (Figure). The obstacles faced during the implementation phase will likely include 2 of these levels simultaneously, which could add complexity and hinder or prevent the implementation of a tangible successful QI process and eventually lead to backsliding or minimal fixed improvement rather than continuous improvement. Furthermore, a patient-centered approach to QI would contribute to further complexity in design and execution, given the importance of reaching sustainable, meaningful improvement by adding elements of patient’s preferences, caregiver engagement, and the shared decision-making processes.9
Overcoming these multidomain barriers and reaching resilience and sustainability requires thoughtful planning and execution through a multifaceted approach.10 A meaningful start could include addressing the clinical inertia for the individual and the team by promoting open innovation and allowing outside institutional collaborations and ideas through networks.11 On the individual level, encouraging participation and motivating health care workers in QI to reach a multidisciplinary operation approach will lead to harmony in collaboration. Concurrently, the organization should support the QI capability and scalability by removing competing priorities and establishing effective leadership that ensures resource allocation, communicates clear value-based principles, and engenders a psychological safety environment.
A state of continuous improvement is the optimal QI target, one that can be attained by removing obstacles and paving a clear pathway to implementation. Focusing on the 3 levels of barriers will position the organization for meaningful and successful QI phases that achieve continuous improvement.
1. Adesola S, Baines T. Developing and evaluating a methodology for business process improvement. Business Process Manage J. 2005;11(1):37-46. doi:10.1108/14637150510578719
2. Gershon M. Choosing which process improvement methodology to implement. J Appl Business & Economics. 2010;10(5):61-69.
3. Porter ME, Teisberg EO. Redefining Health Care: Creating Value-Based Competition on Results. Harvard Business Press; 2006.
4. Holweg M, Davies J, De Meyer A, Lawson B, Schmenner RW. Process Theory: The Principles of Operations Management. Oxford University Press; 2018.
5. Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Q. 1998;76(4):593-624. doi:10.1111/1468-0009.00107
6. Solomons NM, Spross JA. Evidence-based practice barriers and facilitators from a continuous quality improvement perspective: an integrative review. J Nurs Manage. 2011;19(1):109-120. doi:10.1111/j.1365-2834.2010.01144.x
7. Phillips LS, Branch WT, Cook CB, et al. Clinical inertia. Ann Intern Med. 2001;135(9):825-834. doi:10.7326/0003-4819-135-9-200111060-00012
8. Stevenson K, Baker R, Farooqi A, Sorrie R, Khunti K. Features of primary health care teams associated with successful quality improvement of diabetes care: a qualitative study. Fam Pract. 2001;18(1):21-26. doi:10.1093/fampra/18.1.21
9. What is patient-centered care? NEJM Catalyst. January 1, 2017. Accessed August 31, 2022. https://catalyst.nejm.org/doi/full/10.1056/CAT.17.0559
10. Kilbourne AM, Beck K, Spaeth-Rublee B, et al. Measuring and improving the quality of mental health care: a global perspective. World Psychiatry. 2018;17(1):30-38. doi:10.1002/wps.20482
11. Huang HC, Lai MC, Lin LH, Chen CT. Overcoming organizational inertia to strengthen business model innovation: An open innovation perspective. J Organizational Change Manage. 2013;26(6):977-1002. doi:10.1108/JOCM-04-2012-0047
AI and reality – diagnosing otitis media is a real challenge
Let’s pretend for a moment that you receive a call from one of your college roommates who, thanks to his family connections, has become a venture capitalist in California. His group is considering investing in a start-up that is developing a handheld instrument that it claims will use artificial intelligence to diagnose ear infections far more accurately than the human eye. He wonders if you would like to help him evaluate the company’s proposal and offers you a small percentage of the profits for your efforts should they choose to invest.
Your former roommate has done enough research on his own to understand that otitis media makes up a large chunk of a pediatrician’s workload and that making an accurate diagnosis can often be difficult in a struggling child. He describes his own experience watching a frustrated pediatrician attempting to remove wax from his child’s ear and eventually prescribing antibiotics “to be safe.”
You agree and review the prospectus, which includes a paper from a peer-reviewed journal. What you discover is that the investigators used more than 600 high-resolution images of tympanic membranes taken “during operative myringotomy and tympanostomy tube placement” and the findings at tympanocentesis to train a neural network.
Once trained, the model they developed could differentiate with 95% accuracy among an image of a tympanic membrane that covered a normal middle ear, one that covered a middle ear merely containing fluid, and one that covered a middle ear containing infected fluid. When these same images were shown to 39 clinicians, more than half of whom were pediatricians and who included both faculty-level staff and trainees, the average diagnostic accuracy was 65%.
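For readers curious about what that head-to-head comparison actually measures, multiclass diagnostic accuracy is simply the fraction of images labeled correctly. The toy calculation below illustrates the idea; the class names match the study's three diagnostic categories, but the example cases and the resulting percentages are hypothetical, not data from the paper.

```python
# Toy illustration of multiclass diagnostic accuracy, as compared
# between the AI model and the clinicians in the study described above.
# The example cases below are hypothetical, not the study's data.

def accuracy(true_labels, predicted_labels):
    """Fraction of cases where the prediction matches the true label."""
    assert len(true_labels) == len(predicted_labels)
    correct = sum(t == p for t, p in zip(true_labels, predicted_labels))
    return correct / len(true_labels)

# Three diagnostic classes, as in the paper: normal middle ear,
# middle ear with fluid (effusion), and middle ear with infected fluid.
truth     = ["normal", "effusion", "infected", "normal", "infected"]
model     = ["normal", "effusion", "infected", "normal", "infected"]
clinician = ["normal", "infected", "infected", "normal", "effusion"]

print(f"model accuracy:     {accuracy(truth, model):.0%}")      # → 100%
print(f"clinician accuracy: {accuracy(truth, clinician):.0%}")  # → 60%
```

Note that raw accuracy over curated images says nothing about how either the model or the clinician performs on the squirming, wax-obstructed patients discussed below, which is exactly the gap the column goes on to highlight.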
The prospectus includes a prediction that this technology could easily be developed into a handheld instrument similar to a traditional otoscope, which could then be linked to the operator’s smartphone, giving the clinician an instant treat-or-no-treat answer.
Now, remember you have nothing to lose except maybe a friendship. How would you advise your old college roommate?
My advice to your college buddy would be one of caution! Yes, there is a potentially big upside, because there is a real need for a device that could provide the diagnostic accuracy this AI model promises. While I suspect that AI will always be more accurate at diagnosis from static images, I bet that most people, clinicians and nonclinicians alike, could improve their accuracy with an hour of practice linking photos to diagnoses.
However, evaluating a high-resolution photograph taken through an operative scope inserted into the cerumen-free ear canal of a sedated, afebrile child is several orders of magnitude less difficult than diagnosing otitis media under the real-world conditions in which that diagnosis is usually made.
If the venture capitalists were still interested in getting into the otitis media marketplace, you might suggest they look into companies that have already developed image-capture otoscopes. At this point I could find only one on the Internet that was portable, and it certainly isn’t small-child friendly. Once we have a tool that can capture images in real-world situations, the next step will be to train AI systems to interpret them using the approach these researchers have developed. I bet it can be done. It will be only a matter of time ... and money.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Where a child eats breakfast is important
We’ve been told for decades that a child who doesn’t start the day with a good breakfast is entering school at a serious disadvantage. The brain needs a good supply of energy to learn optimally. So the standard wisdom goes. Subsidized school breakfast programs have been built around this chestnut. But, is there solid evidence to support the notion that simply adding a morning meal to a child’s schedule will improve his or her school performance? It sounds like common sense, but is it just one of those old grandmother’s nuggets that doesn’t stand up under close scrutiny?
A recent study from Spain suggests that the relationship between breakfast and school performance is not merely related to the nutritional needs of a growing brain. Using data from nearly 4,000 Spanish children aged 4-14 collected in a 2017 national health survey, the investigators found “skipping breakfast and eating breakfast out of the home were linked to greater odds of psychosocial behavioral problems than eating breakfast at home.” And, we already know that, in general, children who misbehave in school don’t thrive academically.
There were also associations between the absence or presence of certain food groups in the morning meal with behavioral problems. But the data lacked the granularity to draw any firm conclusions – although the authors felt that what they consider a healthy Spanish diet may have had a positive influence on behavior.
The findings in this study may simply be another example of the many positive influences that have been associated with family meals and have little to do with what is actually consumed. The association may not have much to do with the family gathering together at a single Norman Rockwell sitting, a reality that I suspect seldom occurs. The apparent positive influence of breakfast may be that it reflects a family’s priorities: that food is important, that sleep is important, and that school is important – so important that scheduling the morning should focus on sending the child off well prepared. The child who is allowed to stay up to an unhealthy hour is likely to be difficult to arouse in the morning for breakfast and getting off to school.
It may be that the child’s behavior problems are so disruptive and taxing for the family that even with their best efforts, the parents can’t find the time and energy to provide a breakfast in the home.
On the other hand, the study doesn’t tell us how many children aren’t offered breakfast at home because their families simply can’t afford it. Obviously, the answer depends on the socioeconomic mix of a given community. In some localities this may represent a sizable percentage of the population.
So where does this leave us? Unfortunately, as I read through the discussion at the end of this paper I felt that the authors were leaning too much toward further research based on the potential associations between behavior and specific food groups their data suggested.
For me, the take-home message from this paper is that our existing efforts to improve academic success with food offered in school should also include strategies that promote eating breakfast at home. For example, the backpack take-home food distribution programs that seem to have been effective could include breakfast-targeted items packaged in a way that encourages families to provide breakfast at home.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Five contract red flags every physician should know
Recruiting health care workers is a challenge these days for both private practice and hospital employers, and competition can be fierce. In order to be competitive, employers need to review the package they are offering potential candidates and understand that it’s more than just compensation and benefits that matter.
As someone who reviews physician contracts extensively, I see some common examples of language that may cause a candidate to choose a different position.
Probationary period
Although every employer wants to find out whether they like the physician or midlevel employee they have just hired before fully committing, the inclusion of a probationary period (usually 90 days) is offensive to a candidate, especially one with a choice of contracts.
Essentially, the employer is asking the employee to (potentially) relocate, go through the credentialing process, and turn down other potential offers, all for the possibility that they could easily be terminated. Probationary periods typically allow an employee to be immediately terminated without notice or cause, which can then leave them stranded without a paycheck (and with a new home and/or other recent commitments).
Moreover, contracts with probationary periods tend to terminate the employee without covering any tail costs or clarifying that the employer will not enforce restrictive provisions (even if unlikely to be legally enforceable based on the short relationship).
It is important to understand that the process of a person finding a new position, which includes interviewing, contract negotiation, and credentialing, can take up to 6 months. For this reason, probationary provisions create real job insecurity for a candidate.
Entering into a new affiliation is a leap of faith both for the employer and the employee. If the circumstances do not work out, the employer should fairly compensate the employee for the notice period and ask them not to return to work or otherwise allow them to keep working the notice period while they search for a new position.
Acceleration of notice
Another objectionable provision that employers like to include in their contracts is one which allows the employer to accelerate and immediately terminate an employee who has given proper notice.
The contract will contain a standard notice provision, but when the health care professional submits notice, their last date is suddenly accelerated, and they are released without further compensation, notice, or benefits. This type of provision is particularly offensive to health care employees who take the step of giving proper contractual notice and, similar to the probationary language, can create real job insecurity for an employee who suddenly loses their paycheck and has no new job to start.
Medical workers should be paid for the entire notice period whether or not they are allowed to work. Unfortunately, this type of provision is sometimes hidden in contracts and not noticed by employees, who tend to focus on the notice provision itself. I consider this provision to be a red flag about the employer when I review clients’ contracts.
Malpractice tail
Although many employers will claim it is not unusual for an employee to pay for their own malpractice tail, in the current marketplace, the payment of tail can be a deciding factor in whether a candidate accepts a contract.
At a minimum, employers should consider paying for the tail under circumstances where they non-renew a contract, terminate without cause, or the contract is terminated for the employer’s breach. Similarly, I like to seek out payment of the tail by the employer where the contract is terminated owing to a change in the law, use of a force majeure provision, loss of the employer’s hospital contract, or similar provisions where termination is outside the control of the employee.
Employers should also consider a provision where they share the cost of a tail or cover the entire cost on the basis of years of service in order to stand out to a potential candidate.
Noncompete provisions
I do not find noncompete provisions to be generally unacceptable when properly written; however, employers should reevaluate the reasonableness of their noncompete language frequently, because such language can make the difference in whether a candidate accepts a contract.
A reasonable noncompete that only protects the employer as necessary and does not restrict the reasonable practice of medicine is always preferable and can be the deciding factor for a candidate. Tying enforcement of a noncompete to reasons for termination (similar to the tail) can also make a positive difference in a candidate’s review of a contract.
Egregious noncompetes, where the candidate is simply informed that the language is “not negotiable,” are unlikely to be compelling to a candidate with other options.
Specifics on location, call, schedule
One thing potential employees find extremely frustrating is a contract that fails to include promises made regarding location, call, and schedule.
These particular items affect a physician’s expectations about a job, including commute time, family life, and lifestyle. An employer or recruiter that makes a lot of promises on these points but won’t commit to the details in writing (or at least offer mutual agreement on these issues) can cause an uncertain candidate to choose the job that offers greater certainty.
There are many provisions of a contract that can make a difference to a particular job applicant. A savvy employer seeking to capture a particular health care professional should find out what the specific goals and needs of the candidate might be and consider adjusting the contract to best satisfy the candidate.
At the end of the day, however, at least for those physicians and others reviewing contracts that are fairly equivalent, it may be the fairness of the contract provisions that end up being the deciding factor.
Ms. Adler is Health Law Group Practice Leader for the law firm Roetzel in Chicago. She reported no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
Recruiting health care workers is a challenge these days for both private practice and hospital employers, and competition can be fierce. In order to be competitive, employers need to review the package they are offering potential candidates and understand that it’s more than just compensation and benefits that matter.
As someone who reviews physician contracts extensively, I see some common examples of language that may cause a candidate to choose a different position.
Probationary period
Although every employer wants to find out if they like the physician or midlevel employee that they have just hired before fully committing, the inclusion of a probationary period (usually 90 days) is offensive to a candidate, especially one with a choice of contracts.
Essentially, the employer is asking the employee to (potentially) relocate, go through the credentialing process, and turn down other potential offers, all for the possibility that they could easily be terminated. Probationary periods typically allow an employee to be immediately terminated without notice or cause, which can then leave them stranded without a paycheck (and with a new home and/or other recent commitments).
Moreover, contracts with probationary periods tend to allow termination without covering any tail costs or clarifying that the employer will not enforce restrictive provisions (even if those are unlikely to be legally enforceable after such a short relationship).
It is important to understand that the process of a person finding a new position, which includes interviewing, contract negotiation, and credentialing, can take up to 6 months. For this reason, probationary provisions create real job insecurity for a candidate.
Entering into a new affiliation is a leap of faith both for the employer and the employee. If the circumstances do not work out, the employer should fairly compensate the employee for the notice period and ask them not to return to work or otherwise allow them to keep working the notice period while they search for a new position.
Acceleration of notice
Another objectionable provision that employers like to include in their contracts is one which allows the employer to accelerate and immediately terminate an employee who has given proper notice.
The contract will contain a standard notice provision, but when the health care professional submits notice, their last date is suddenly accelerated, and they are released without further compensation, notice, or benefits. This type of provision is particularly offensive to health care employees who take the step of giving proper contractual notice and, similar to the probationary language, can create real job insecurity for an employee who suddenly loses their paycheck and has no new job to start.
Medical workers should be paid for the entire notice period whether or not they are allowed to work. Unfortunately, this type of provision is sometimes hidden in contracts and not noticed by employees, who tend to focus on the notice provision itself. I consider this provision to be a red flag about the employer when I review clients’ contracts.
Malpractice tail
Although many employers will claim it is not unusual for an employee to pay for their own malpractice tail, in the current marketplace, the payment of tail can be a deciding factor in whether a candidate accepts a contract.
At a minimum, employers should consider paying for the tail under circumstances where they non-renew a contract, terminate without cause, or the contract is terminated for the employer’s breach. Similarly, I like to seek out payment of the tail by the employer where the contract is terminated owing to a change in the law, use of a force majeure provision, loss of the employer’s hospital contract, or similar provisions where termination is outside the control of the employee.
Employers should also consider a provision where they share the cost of a tail or cover the entire cost on the basis of years of service in order to stand out to a potential candidate.
Noncompete provisions
Noncompete provisions, when properly written, are not necessarily unacceptable; however, employers should frequently reevaluate the reasonableness of their noncompete language, because it can make the difference in whether a candidate accepts a contract.
A reasonable noncompete that only protects the employer as necessary and does not restrict the reasonable practice of medicine is always preferable and can be the deciding factor for a candidate. Tying enforcement of a noncompete to reasons for termination (similar to the tail) can also make a positive difference in a candidate’s review of a contract.
Egregious noncompetes, where the candidate is simply informed that the language is “not negotiable,” are unlikely to be compelling to a candidate with other options.
Specifics on location, call, schedule
One thing potential employees find extremely frustrating is a contract that fails to include promises made regarding location, call, and schedule.
These particular items affect a physician’s expectations about a job, including commute time, family life, and lifestyle. An employer or recruiter that makes a lot of promises on these points but won’t commit to the details in writing (or at least offer mutual agreement on these issues) can cause an uncertain candidate to choose the job that offers greater certainty.
There are many provisions of a contract that can make a difference to a particular job applicant. A savvy employer seeking to capture a particular health care professional should find out what the specific goals and needs of the candidate might be and consider adjusting the contract to best satisfy the candidate.
At the end of the day, however, at least for those physicians and others reviewing contracts that are fairly equivalent, it may be the fairness of the contract provisions that ends up being the deciding factor.
Ms. Adler is Health Law Group Practice Leader for the law firm Roetzel in Chicago. She reported no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
Bias and other barriers to HSCT access
For example, at the June 5 plenary session of the American Society of Clinical Oncology, Paul Richardson, MD, presented results of the DETERMINATION trial. More than 40,000 attendees heard his message that, in patients with newly diagnosed multiple myeloma (MM), up-front high-dose melphalan with autologous hematopoietic stem cell transplant (HSCT) support is associated with a significantly longer median progression-free survival of 67 months, compared with 46 months for patients randomized to delayed transplantation. The 5-year overall survival is similar for both arms.
While I and many of my colleagues in the field of transplantation used these data to strongly encourage MM patients to undergo HSCT as consolidation of their initial remission, others – including many investigators on the DETERMINATION trial – reached a starkly different conclusion. They suggested that delaying transplant was a valid option, since no overall survival benefit was observed.
Bias, when defined as a prejudice in favor of or against a specific treatment on the part of physicians and patients, has not been carefully studied in the realm of cellular therapies. However, physician and patient perceptions or misperceptions about the value or toxicity of a specific therapy are probably major drivers of whether a patient is referred for and accepts a particular form of treatment. In my specialization, that would mean either a stem cell transplant or other forms of cell therapy.
As with other medical procedures, in my field there are significant disparities in the use of transplantation among patients of different racial, ethnic, and age groups. Rates of both auto- and allo-HSCT are significantly higher for Whites than for African Americans. Hispanic patients have the lowest rates of utilization of auto-HSCT. Patients over the age of 60 have an eightfold higher risk of nonreferral to an HSCT center. Obviously, these nonreferrals reduce access to HSCT for older patients, particularly if they are seen at nonacademic centers.
One must ask whether these disparities arise because physicians do not believe in the value of transplantation, or simply do not understand it. Or do they just lack the time to refer patients to a transplant center?
Socioeconomic factors, insurance status, age, and psychosocial characteristics all impact access to HSCT, yet some older patients with fewer economic resources and less insurance coverage still undergo the procedure. Is that because their physicians spent time educating these patients about the potential value of this treatment? Is it because the physicians went the extra mile to get these patients access to HSCT?
Physician preference also plays a significant role in whether a patient receives an allo-HSCT for acute myeloid leukemia and myelodysplastic syndrome. In a large survey of hematologists and oncologists performed by Pidala and colleagues, half of those surveyed agreed with the statement: “I feel the risk (morbidity and mortality) after HSCT is very high.” Most indicated that they “feel outcomes of unrelated donor HCT are much worse than matched sibling HCT.”
More importantly, more than one-third of those surveyed agreed that, “because of the high risks of allogeneic HSCT, I refer only after failure of conventional chemotherapy.” They voiced this opinion despite the fact that mortality rates after HSCT have been reduced significantly. With modern techniques, outcomes of unrelated donors are as good as with sibling donor transplants, and national guidelines strongly recommend that patients get referred before they become refractory to chemotherapy.
What can we do about this problem? Obviously, physician and provider education is important, but primary care physicians and general oncologists are already bombarded daily with new information. Relatively rare conditions like those we treat simply may not get their attention.
Personally, I think one of the most effective ways to overcome bias among physicians would be to target patients through a direct advertising campaign and public service announcements. Only by getting the attention of patients can they be directed to current, accurate information.
This solution could reduce the impact of physician biases or misperceptions and provide patients with greater access to lifesaving cell therapies.
Dr. Giralt is deputy division head of the division of hematologic malignancies at Memorial Sloan Kettering Cancer Center in New York.
From neuroplasticity to psychoplasticity: Psilocybin may reverse personality disorders and political fanaticism
One of psychiatry’s long-standing dogmas is that personality disorders are enduring, unchangeable, and not amenable to treatment with potent psychotropics or intensive psychotherapy. I propose that this dogma may soon be shattered.
Several other dogmas in psychiatry have been demolished over the past several decades:
- that “insanity” is completely irreversible and requires lifetime institutionalization. The serendipitous discovery of chlorpromazine1 annihilated this centuries-old dogma
- that chronic, severe, refractory depression (with ongoing suicidal urges) that fails to improve with pharmacotherapy or electroconvulsive therapy (ECT) is hopeless and untreatable, until ketamine not only pulverized this dogma, but did it with lightning speed, dazzling us all2
- that dissociative agents such as ketamine are dangerous and condemnable drugs of abuse, until the therapeutic effect of ketamine slayed that dragon3
- that ECT “fries” the brain (as malevolently propagated by antipsychiatry cults), which was completely disproven by neuroimaging studies that show the hippocampus (which shrinks during depression) actually grows by >10% after a few ECT sessions4
- that psychotherapy is not a “real” treatment because talking cannot reverse a psychiatric brain disorder, until studies showed significant neuroplasticity with psychotherapy and decrease in inflammatory biomarkers with cognitive-behavioral therapy (CBT)5
- that persons with refractory hallucinations and delusions are doomed to a life of disability, until clozapine torpedoed that pessimistic dogma6
- that hallucinogens/psychedelics are dangerous and should be banned, until a jarring paradigm shift occurred with the discovery of psilocybin’s transformative effects, and the remarkable therapeutic effects of its mystical trips.7
Psilocybin’s therapeutic effects
Psilocybin has already proved to have a strong and lasting effect on depression and promises to have therapeutic benefits for patients with substance use disorders, posttraumatic stress disorder (PTSD), and anxiety.8 In addition, when the multiple psychological and neurobiological effects of psilocybin (and of other psychedelics) are examined, I see a very promising path to amelioration of severe personality disorders such as psychopathy, antisocial behavior, and narcissism. The mechanism(s) of action of psilocybin on the human brain are drastically different from those of any man-made psychotropic agent. As a psychiatric neuroscientist, I envision the neurologic impact of psilocybin to be conducive to a complete transformation of a patient’s view of themself, other people, and the meaning of life. It is reminiscent of religious conversion.
The psychological effects of psilocybin in humans have been described as follows:
- emotional breakthrough9
- increased psychological flexibility,10,11 a very cortical effect
- mystical experience,12 which results in sudden and significant changes in behavior and perception and includes the following dimensions: sacredness, noetic quality, deeply felt positive mood, ineffability, paradoxicality, and transcendence of time and space13
- oceanic boundlessness, feeling “one with the universe”14
- universal interconnectedness, insightfulness, blissful state, spiritual experience14
- ego dissolution,15 with loss of one’s personal identity
- increased neuroplasticity16
- changes in cognition and increase in insight.17
The neurobiological effects of psilocybin are mediated by serotonin 5-HT2A receptor agonism and include the following18:
- reduction in the activity of the medial prefrontal cortex, which regulates memory, attention, inhibitory control, and habit
- a decrease in the connectivity between the medial prefrontal cortex and the posterior cingulate cortex, which regulates memory and emotions
- reduced activity of the default mode network, which is active during rest, stimulating internal thoughts and reminiscing about previous feelings and events, sometimes including ruminations. Psilocybin reverses those processes, shifting thinking toward others, not just the self, and toward greater open-mindedness about the world and other people. This can be therapeutic for depression, which is often associated with negative ruminations, but also for entrenched habits (addictive behaviors), anxiety, PTSD, and obsessive-compulsive disorders
- increased global functional connectivity among various brain networks, leading to stronger functional integration of behavior
- collapse of major cortical oscillatory rhythms such as alpha and others that perpetuate “prior” beliefs
- extensive neuroplasticity and recalibration of thought processes and decomposition of pathological beliefs, referred to as REBUS (relaxed beliefs under psychedelics).
The bottom line is that psilocybin and other psychedelics can dramatically alter, reshape, and relax rigid beliefs and personality traits by decreasing “neuroticism” and increasing “extraversion,” insightfulness, openness, and possibly conscientiousness.19 Although no studies of psychedelics in psychopathic, antisocial, or narcissistic personality disorders have been conducted, it is very reasonable to speculate that psilocybin may reverse traits of these disorders such as callousness, lack of empathy, and pathological self-centeredness.
Going further, a preliminary report suggests psilocybin can modify political views by decreasing authoritarianism and increasing libertarianism.20,21 In the current political zeitgeist, could psychedelics such as psilocybin reduce or even eliminate political extremism and visceral hatred on all sides? It would be remarkable research to carry out toward healing a politically divided populace. The dogma of untreatable personality disorders or hopelessly entrenched political extremism is on the chopping block, and psychedelics offer hope to splinter those beliefs by concurrently remodeling brain tissue (neuroplasticity) and rectifying the mindset (psychoplasticity).
One of psychiatry’s long-standing dogmas is that personality disorders are enduring, unchangeable, and not amenable to treatment with potent psychotropics or intensive psychotherapy. I propose that this dogma may soon be shattered.
Several other dogmas in psychiatry have been demolished over the past several decades:
- that “insanity” is completely irreversible and requires lifetime institutionalization. The serendipitous discovery of chlorpromazine1 annihilated this centuries-old dogma
- that chronic, severe, refractory depression (with ongoing suicidal urges) that fails to improve with pharmacotherapy or electroconvulsive therapy (ECT) is hopeless and untreatable, until ketamine not only pulverized this dogma, but did it with lightning speed, dazzling us all2
- that dissociative agents such as ketamine are dangerous and condemnable drugs of abuse, until the therapeutic effect of ketamine slayed that dragon3
- that ECT “fries” the brain (as malevolently propagated by antipsychiatry cults), which was completely disproven by neuroimaging studies that show the hippocampus (which shrinks during depression) actually grows by >10% after a few ECT sessions4
- that psychotherapy is not a “real” treatment because talking cannot reverse a psychiatric brain disorder, until studies showed significant neuroplasticity with psychotherapy and decrease in inflammatory biomarkers with cognitive-behavioral therapy (CBT)5
- that persons with refractory hallucinations and delusions are doomed to a life of disability, until clozapine torpedoed that pessimistic dogma6
- that hallucinogens/psychedelics are dangerous and should be banned, until a jarring paradigm shift occurred with the discovery of psilocybin’s transformative effects and the remarkable therapeutic benefits of its mystical trips.7
Psilocybin’s therapeutic effects
Psilocybin has already proven to have a strong and lasting effect on depression and promises therapeutic benefits for patients with substance use disorders, posttraumatic stress disorder (PTSD), and anxiety.8 In addition, when the multiple psychological and neurobiological effects of psilocybin (and of other psychedelics) are examined, I see a very promising path to the amelioration of severe personality disorders such as psychopathy, antisocial personality disorder, and narcissism. The mechanism(s) of action of psilocybin on the human brain are drastically different from those of any man-made psychotropic agent. As a psychiatric neuroscientist, I envision the neurologic impact of psilocybin to be conducive to a complete transformation of a patient’s view of themself, other people, and the meaning of life. It is reminiscent of religious conversion.
The psychological effects of psilocybin in humans have been described as follows:
- emotional breakthrough9
- increased psychological flexibility,10,11 a very cortical effect
- mystical experience,12 which results in sudden and significant changes in behavior and perception and includes the following dimensions: sacredness, noetic quality, deeply felt positive mood, ineffability, paradoxicality, and transcendence of time and space13
- oceanic boundlessness, feeling “one with the universe”14
- universal interconnectedness, insightfulness, blissful state, spiritual experience14
- ego dissolution,15 with loss of one’s personal identity
- increased neuroplasticity16
- changes in cognition and increase in insight.17
The neurobiological effects of psilocybin are mediated by serotonin 5HT2A agonism and include the following18:
- reduction in the activity of the medial prefrontal cortex, which regulates memory, attention, inhibitory control, and habit
- a decrease in the connectivity between the medial prefrontal cortex and the posterior cingulate cortex, which regulates memory and emotions
- reduced activity of the default mode network, which is active during rest and generates internally focused thought: reminiscing about previous feelings and events, sometimes to the point of rumination. Psilocybin shifts these processes toward thinking about others, not just the self, and toward greater open-mindedness about the world and other people. This can be therapeutic for depression, which is often associated with negative ruminations, but also for entrenched habits (addictive behaviors), anxiety, PTSD, and obsessive-compulsive disorders
- increased global functional connectivity among various brain networks, leading to stronger functional integration of behavior
- collapse of major cortical oscillatory rhythms such as alpha and others that perpetuate “prior” beliefs
- extensive neuroplasticity and recalibration of thought processes and decomposition of pathological beliefs, referred to as REBUS (relaxed beliefs under psychedelics).
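The "increased global functional connectivity" listed above is typically quantified in imaging studies as the average pairwise correlation of regional activity time series. A minimal sketch of that calculation, using synthetic data rather than a real neuroimaging pipeline (function name and data shapes are illustrative assumptions):

```python
import numpy as np

def global_functional_connectivity(ts: np.ndarray) -> float:
    """Mean absolute pairwise Pearson correlation across brain regions.

    ts: array of shape (n_regions, n_timepoints) of activity time series.
    Purely illustrative; real pipelines add preprocessing, filtering,
    and motion correction before computing connectivity.
    """
    corr = np.corrcoef(ts)                 # (n_regions, n_regions) matrix
    iu = np.triu_indices_from(corr, k=1)   # unique region pairs only
    return float(np.abs(corr[iu]).mean())

rng = np.random.default_rng(0)
ts = rng.standard_normal((10, 200))        # 10 regions, 200 timepoints
print(round(global_functional_connectivity(ts), 3))
```

Under this definition, a drug-induced rise in the mean off-diagonal correlation is what "stronger functional integration" refers to operationally.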
The bottom line is that psilocybin and other psychedelics can dramatically alter, reshape, and relax rigid beliefs and personality traits by decreasing “neuroticism” and increasing “extraversion,” insightfulness, openness, and possibly conscientiousness.19 Although no studies of psychedelics in psychopathic, antisocial, or narcissistic personality disorders have been conducted, it is very reasonable to speculate that psilocybin may reverse traits of these disorders such as callousness, lack of empathy, and pathological self-centeredness.
Going further, a preliminary report suggests psilocybin can modify political views by decreasing authoritarianism and increasing libertarianism.20,21 In the current political zeitgeist, could psychedelics such as psilocybin reduce or even eliminate political extremism and visceral hatred on all sides? That would be remarkable research to carry out toward healing a politically divided populace. The dogma of untreatable personality disorders, and of hopelessly entrenched political extremism, is on the chopping block, and psychedelics offer hope of splintering those beliefs by concurrently remodeling brain tissue (neuroplasticity) and rectifying the mindset (psychoplasticity).
1. Delay J, Deniker P. Neuroleptic effects of chlorpromazine in therapeutics of neuropsychiatry. J Clin Exp Psychopathol. 1955;16(2):104-112.
2. Walsh Z, Mollaahmetoglu OM, Rootman J, et al. Ketamine for the treatment of mental health and substance use disorders: comprehensive systematic review. BJPsych Open. 2021;8(1):e19. doi:10.1192/bjo.2021.1061
3. Lener MS, Kadriu B, Zarate CA Jr. Ketamine and beyond: investigations into the potential of glutamatergic agents to treat depression. Drugs. 2017;77(4):381-401.
4. Ayers B, Leaver A, Woods RP, et al. Structural plasticity of the hippocampus and amygdala induced by electroconvulsive therapy in major depression. Biol Psychiatry. 2016;79(4):282-292.
5. Cao B, Li R, Ding L, et al. Does cognitive behaviour therapy affect peripheral inflammation of depression? A protocol for the systematic review and meta-analysis. BMJ Open. 2021;11(12):e048162. doi:10.1136/bmjopen-2020-048162
6. Wagner E, Siafis S, Fernando P, et al. Efficacy and safety of clozapine in psychotic disorders—a systematic quantitative meta-review. Transl Psychiatry. 2021;11(1):487.
7. Daws RE, Timmermann C, Giribaldi B, et al. Increased global integration in the brain after psilocybin therapy for depression. Nat Med. 2022;28(4):844-851.
8. Pearson C, Siegel J, Gold JA. Psilocybin-assisted psychotherapy for depression: emerging research on a psychedelic compound with a rich history. J Neurol Sci. 2022;434:120096. doi:10.1016/j.jns.2021.120096
9. Roseman L, Haijen E, Idialu-Ikato K, et al. Emotional breakthrough and psychedelics: validation of the Emotional Breakthrough Inventory. J Psychopharmacol. 2019;33(9):1076-1087.
10. Davis AK, Barrett FS, Griffiths RR. Psychological flexibility mediates the relations between acute psychedelic effects and subjective decreases in depression and anxiety. J Contextual Behav Sci. 2020;15:39-45.
11. Hayes SC, Luoma JB, Bond FW, et al. Acceptance and commitment therapy: model, processes and outcomes. Behav Res Ther. 2006;44(1):1-25.
12. Ross S, Bossis A, Guss J, et al. Rapid and sustained symptom reduction following psilocybin treatment for anxiety and depression in patients with life-threatening cancer: a randomized controlled trial. J Psychopharmacol. 2016;30(12):1165-1180.
13. Stace WT. Mysticism and Philosophy. Macmillan Pub Ltd; 1960:37.
14. Barrett FS, Griffiths RR. Classic hallucinogens and mystical experiences: phenomenology and neural correlates. Curr Top Behav Neurosci. 2018;36:393-430.
15. Nour MM, Evans L, Nutt D, et al. Ego-dissolution and psychedelics: validation of the Ego-Dissolution Inventory (EDI). Front Hum Neurosci. 2016;10:269. doi:10.3389/fnhum.2016.00269
16. Olson DE. The subjective effects of psychedelics may not be necessary for their enduring therapeutic effects. ACS Pharmacol Transl Sci. 2020;4(2):563-567.
17. Carhart-Harris RL, Bolstridge M, Day CMJ, et al. Psilocybin with psychological support for treatment-resistant depression: six-month follow-up. Psychopharmacology (Berl). 2018;235(2):399-408.
18. Carhart-Harris RL. How do psychedelics work? Curr Opin Psychiatry. 2019;32(1):16-21.
19. Erritzoe D, Roseman L, Nour MM, et al. Effects of psilocybin therapy on personality structure. Acta Psychiatr Scand. 2018;138(5):368-378.
20. Lyons T, Carhart-Harris RL. Increased nature relatedness and decreased authoritarian political views after psilocybin for treatment-resistant depression. J Psychopharmacol. 2018;32(7):811-819.
21. Nour MM, Evans L, Carhart-Harris RL. Psychedelics, personality and political perspectives. J Psychoactive Drugs. 2017;49(3):182-191.
More on neurotransmitters
The series “Neurotransmitter-based diagnosis and treatment: A hypothesis” (Part 1:
The presentation of abnormal neurotransmission may occur along a continuum. For example, extreme dopamine deficiency can present as catatonia, moderate deficiency may present with inattention, normal activity permits adaptive functioning, and excitatory delirium and sudden death may be at the extreme end of dopaminergic excess.1
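The continuum just described can be thought of as a simple mapping from a neurotransmitter-activity level to a clinical presentation. A toy sketch of that idea, where the normalized score and its cut points are invented for illustration only, not clinical thresholds:

```python
def dopaminergic_presentation(score: float) -> str:
    """Map a hypothetical normalized dopamine-activity score (0-1) onto
    the clinical continuum described in the text.

    The cut points below are arbitrary illustrations, not validated
    clinical values.
    """
    if score < 0.15:
        return "catatonia (extreme deficiency)"
    if score < 0.40:
        return "inattention (moderate deficiency)"
    if score <= 0.75:
        return "adaptive functioning (normal activity)"
    return "excitatory delirium (extreme excess)"

print(dopaminergic_presentation(0.5))  # → "adaptive functioning (normal activity)"
```

The point of the sketch is structural: a single underlying quantity can account for qualitatively different presentations at its extremes.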
The amplitude, rate of change, and location of neurotransmitter dysfunction may determine which specialty takes the primary treatment role. Fatigue, pain, sleep difficulty, and emotional distress require clinicians to understand the whole patient, which is why health care professionals need cross-training in psychiatry, and why psychiatry must recognize multisystem pathology.
The recognition and treatment of substance use disorders requires an understanding of neurotransmitter symptoms, in terms of both acute drug effects and withdrawal. Fallows2 provides this information in an accessible chart. Discussions of neurotransmitters also have value in managing psychotropic medication withdrawal.3
Acetylcholine is another neurotransmitter of importance; it is essential to normal motor, cognitive, and emotional function. Extreme cholinergic deficiency or anticholinergic crisis has symptoms of pupillary dilation, psychosis, and delirium.4-6 The progressive decline seen in certain dementias is related in part to cholinergic deficit. Dominance of cholinergic activity is associated with depression and biomarkers such as increased rapid eye movement (REM) density, a measure of the frequency of rapid eye movements during REM sleep.7 Cholinergic excess or cholinergic crisis may present with symptoms of salivation, lacrimation, muscle weakness, delirium, or paralysis.8
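REM density, mentioned above as a biomarker, is at its simplest a rate: rapid eye movements per unit of REM sleep. A minimal sketch of that arithmetic (the function name is an assumption; polysomnography scoring uses epoch-based conventions this ignores):

```python
def rem_density(eye_movements: int, rem_minutes: float) -> float:
    """Rapid eye movements per minute of REM sleep.

    Simplified illustration; clinical scoring counts movements per
    30-second epoch and aggregates across REM periods.
    """
    if rem_minutes <= 0:
        raise ValueError("REM sleep duration must be positive")
    return eye_movements / rem_minutes

print(rem_density(180, 90.0))  # → 2.0 movements per minute
```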
The articles alluded to the interaction of neurotransmitter systems (eg, “dopamine blockade helps with endorphin suppression”). Isolating the effects of a single neurotransmitter is useful, but covariance of neurotransmitter activity also has diagnostic and treatment implications.9-11 Abnormalities in these interactions may be part of the causal process in fundamental cognitive functions.12 If endorphin suppression is insensitive to dopamine blockade, a relative endorphin excess may create symptoms. If acetylcholine changes are normally balanced by a relative increase in dopamine and norepinephrine, then a weak catecholamine response would fit the catecholamine-cholinergic balance hypothesis of depression. Neurotransmitter interactions are well worked out in the neurology of the basal ganglia but less clear in the frontal and limbic systems.13
Quantification has been applied in some areas of clinical care. Morphine equivalents are used to express opiate potency, and there are algorithms to summarize the effects of multiple medications on respiratory depression/overdose risk.14,15 Chlorpromazine equivalents were used to translate a range of antipsychotic potencies in the early days of antipsychotic treatment. Adverse effects and some treatment responses partially corresponded to the level of dopamine blockade, but not without noise; estimates of antipsychotic potency vary widely when assessed against clinical efficacy.16 We are still working out the array of medication potency and selectivity across neurotransmitter systems.17,18 For example, paroxetine is a potent serotonin reuptake blocker but is less selective than citalopram, notably antagonizing cholinergic muscarinic receptors.
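The morphine-equivalent idea above reduces a mixed opioid regimen to a single number by weighting each daily dose with a conversion factor. A sketch of that calculation; the factors below are illustrative assumptions in the spirit of published conversion tables (which themselves vary), not dosing guidance:

```python
# Illustrative oral morphine-equivalent (OME) conversion factors.
# Published tables differ; these values are assumptions for demonstration.
FACTORS = {"morphine": 1.0, "oxycodone": 1.5, "hydrocodone": 1.0, "codeine": 0.15}

def total_ome(regimen: dict) -> float:
    """Sum daily doses (mg) after converting each drug to oral
    morphine equivalents via its conversion factor."""
    return sum(FACTORS[drug] * mg for drug, mg in regimen.items())

print(total_ome({"oxycodone": 20, "codeine": 60}))  # → 39.0 mg OME
```

Chlorpromazine equivalents for antipsychotics follow the same weighted-sum logic, which is why both inherit the same limitation: a single scalar discards receptor selectivity.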
The authors noted their hypothesis needs further elaboration and quantification as psychiatry moves from impressionistic practice to firmer science. Measurement of neurotransmitter activity is an area of intense research. Biomeasures have yet to add much value to the clinical practice of psychiatry, but we hope for progress. Functional neuroimaging with sophisticated algorithms is beginning to detail neocortical activity.19 CSF measurement of dopamine and serotonin metabolites seems to correlate with severe depression and suicidal behavior. Noninvasive, wearable technologies to measure galvanic skin response, oxygenation, and neurotransmitter metabolic products may add to neurotransmitter-based assessment and treatment.
Neurotransmitters are one aspect of brain function. Other processes, such as hormonal neuromodulation20 and ion channels, may be over- or underactive. Channelopathies are of particular interest in cardiology and neurology but are also notable in pain and emotional disorders.21-26 Voltage-gated sodium channels are thought to be involved in general anesthesia.27 Adverse effects of some psychotropic medications are best understood as ion channel dysfunction.28 Applying the strategy of this hypothesis to the activation or inactivation of sodium, potassium, and calcium channels could generate useful diagnostic and treatment ideas for further study.
Mark C. Chandler, MD
Triangle Neuropsychiatry
Durham, North Carolina
Disclosures
The author reports no financial relationships with any companies whose products are mentioned in his letter, or with manufacturers of competing products.
References
1. Mash DC. Excited delirium and sudden death: a syndromal disorder at the extreme end of the neuropsychiatric continuum. Front Physiol. 2016;7:435.
2. Fallows Z. MIT MedLinks. Accessed August 8, 2022. http://web.mit.edu/zakf/www/drugchart/drugchart11.html
3. Groot PC, van Os J. How user knowledge of psychotropic drug withdrawal resulted in the development of person-specific tapering medication. Ther Adv Psychopharmacol. 2020;10:2045125320932452. doi:10.1177/2045125320932452
4. Picciotto MR, Higley MJ, Mineur YS. Acetylcholine as a neuromodulator: cholinergic signaling shapes nervous system function and behavior. Neuron. 2012;76(1):116-129.
5. Nair VP, Hunter JM. Anticholinesterases and anticholinergic drugs. Continuing Education in Anaesthesia Critical Care & Pain. 2004;4(5):164-168.
6. Dawson AH, Buckley NA. Pharmacological management of anticholinergic delirium--theory, evidence and practice. Br J Clin Pharmacol. 2016;81(3):516-524.
7. Dulawa SC, Janowsky DS. Cholinergic regulation of mood: from basic and clinical studies to emerging therapeutics. Mol Psychiatry. 2019;24(5):694-709.
8. Adeyinka A, Kondamudi NP. Cholinergic Crisis. StatPearls Publishing; 2022.
9. El Mansari M, Guiard BP, Chernoloz O, et al. Relevance of norepinephrine-dopamine interactions in the treatment of major depressive disorder. CNS Neurosci Ther. 2010;16(3):e1-e17.
10. Esposito E. Serotonin-dopamine interaction as a focus of novel antidepressant drugs. Curr Drug Targets. 2006;7(2):177-185.
11. Kringelbach ML, Cruzat J, Cabral J, et al. Dynamic coupling of whole-brain neuronal and neurotransmitter systems. Proc Natl Acad Sci U S A. 2020;117(17):9566-9576.
12. Thiele A, Bellgrove MA. Neuromodulation of attention. Neuron. 2018;97(4):769-785.
13. Muñoz A, Lopez-Lopez A, Labandeira CM, et al. Interactions between the serotonergic and other neurotransmitter systems in the basal ganglia: role in Parkinson’s disease and adverse effects of L-DOPA. Front Neuroanat. 2020;14:26.
14. Nielsen S, Degenhardt L, Hoban B, et al. A synthesis of oral morphine equivalents (OME) for opioid utilisation studies. Pharmacoepidemiol Drug Saf. 2016;25(6):733-737.
15. Lo-Ciganic WH, Huang JL, Zhang HH, et al. Evaluation of machine-learning algorithms for predicting opioid overdose risk among Medicare beneficiaries with opioid prescriptions. JAMA Netw Open. 2019;2(3):e190968. doi:10.1001/jamanetworkopen.2019.0968
16. Dewan MJ, Koss M. The clinical impact of reported variance in potency of antipsychotic agents. Acta Psychiatr Scand. 1995;91(4):229-232.
17. Woods SW. Chlorpromazine equivalent doses for the newer atypical antipsychotics. J Clin Psychiatry. 2003;64(6):663-667.
18. Hayasaka Y, Purgato M, Magni LR, et al. Dose equivalents of antidepressants: evidence-based recommendations from randomized controlled trials. J Affect Disord. 2015;180:179-184.
19. Hansen JY, Shafiei G, Markello RD, et al. Mapping neurotransmitter systems to the structural and functional organization of the human neocortex. bioRxiv. 2021. https://doi.org/10.1101/2021.10.28.466336
20. Hwang WJ, Lee TY, Kim NS, et al. The role of estrogen receptors and their signaling across psychiatric disorders. Int J Mol Sci. 2020;22(1):373.
21. Lawrence JH, Tomaselli GF, Marban E. Ion channels: structure and function. Heart Dis Stroke. 1993;2(1):75-80.
22. Fedele F, Severino P, Bruno N, et al. Role of ion channels in coronary microcirculation: a review of the literature. Future Cardiol. 2013;9(6):897-905.
23. Kumar P, Kumar D, Jha SK, et al. Ion channels in neurological disorders. Adv Protein Chem Struct Biol. 2016;103:97-136.
24. Quagliato LA, Nardi AE. The role of convergent ion channel pathways in microglial phenotypes: a systematic review of the implications for neurological and psychiatric disorders. Transl Psychiatry. 2018;8(1):259.
25. Bianchi MT, Botzolakis EJ. Targeting ligand-gated ion channels in neurology and psychiatry: is pharmacological promiscuity an obstacle or an opportunity? BMC Pharmacol. 2010;10:3.
26. Imbrici P, Camerino DC, Tricarico D. Major channels involved in neuropsychiatric disorders and therapeutic perspectives. Front Genet. 2013;4:76.
27. Xiao J, Chen Z, Yu B. A potential mechanism of sodium channel mediating the general anesthesia induced by propofol. Front Cell Neurosci. 2020;14:593050. doi:10.3389/fncel.2020.593050
28. Kamei S, Sato N, Harayama Y, et al. Molecular analysis of potassium ion channel genes in sudden death cases among patients administered psychotropic drug therapy: are polymorphisms in LQT genes a potential risk factor? J Hum Genet. 2014;59(2):95-99.
The authors respond
Thank you for your thoughtful commentary. Our conceptual article was not designed to be exhaustive, and everything you wrote adds to what we hoped to bring to the reader’s attention. The mechanisms of disease in psychiatry are numerous and still elusive, and the brain’s complexity is staggering. Our main goal was to point out possible correlations between specific symptoms and specific neurotransmitter activity, and we had to oversimplify to keep the article concise enough for publication. Neurotransmitter effects depend on synthesis, storage, release, reuptake, and degradation; receptor quantity and quality of function, inhibitors, inducers, and many other factors also shape neurotransmitter performance. And, of course, there are additional fundamental neurotransmitters beyond the 6 we touched on. Our ability to sort through all of this is still rudimentary. You also highlight the emerging methods for objectively measuring neurotransmitter activity, which will eventually find their way into clinical practice and become invaluable. Still, we treat people, not tests or pictures, so diagnostic thinking based on clinical presentation will forever remain a cornerstone of caring for individual patients.
We hope scientists and clinicians such as yourself will improve our concept and make it truly practical.
Dmitry M. Arbuck, MD
Clinical Assistant Professor of Psychiatry and Medicine
Indiana University School of Medicine
Indianapolis, Indiana
President and Medical Director
Indiana Polyclinic
Carmel, Indiana
José Miguel Salmerón, MD
Professor
Department of Psychiatry
Universidad del Valle School of Medicine/Hospital Universitario del Valle
Cali, Colombia
Disclosures
The authors report no financial relationships with any companies whose products are mentioned in their response, or with manufacturers of competing products.