News and Views that Matter to Pediatricians
The leading independent newspaper covering news and commentary in pediatrics.
Is ChatGPT a friend or foe of medical publishing?
The International Committee of Medical Journal Editors (ICMJE) recently updated its guidelines on the use of artificial intelligence in scientific publications. These tools should not be listed as authors, and researchers must disclose how AI-assisted technologies were used, the committee said.
These new guidelines are the latest effort by medical journals to define policies for using these large language models (LLMs) in scientific publication. While these AI-assisted tools can help with tasks such as writing, analyzing data, catching mistakes, and much more, they are also prone to errors, noted Casey Greene, PhD, a professor of biomedical informatics at the University of Colorado at Denver, Aurora. It is also not entirely clear how information is stored and processed in these kinds of tools, and who has access to that information, he noted.
At the same time, experts argue that these AI tools could have a positive impact on the field by limiting some of the linguistic disparities in scientific publishing as well as alleviating the burden of some monotonous or mechanical tasks that come along with manuscript writing.
What experts can agree on, though, is that the use of AI tools is here to stay. “This is going to become a common tool,” Dr. Greene said. “I don’t think there’s a way out of that at this point.”
A change in medical publishing
OpenAI released ChatGPT in November 2022. In its own words, ChatGPT is “a deep learning model that has been trained on a massive amount of text data to understand and generate humanlike text.” Enter a question or a prompt, and it will respond. For example, when asked how the AI tool can be used in scientific publishing, ChatGPT responded:
“ChatGPT can aid scientific publishing by generating ideas, clarifying concepts, conducting preliminary literature reviews, providing proofreading and editing suggestions, and assisting with outlining and organizing scientific papers. However, it should be used as a supportive tool alongside domain expertise and validation from trusted scientific sources.”
Just a few months after ChatGPT became available, researchers began using this tool in their own work. One individual, Som Biswas, MD, a radiologist at the University of Tennessee Health Science Center in Memphis, reportedly used ChatGPT to author 16 scientific articles in just 4 months, according to the Daily Beast. Five of these articles have been published in four different journals. Dr. Biswas declined to be interviewed for this article.
There were also reports of papers with ChatGPT as one of the listed authors, which sparked backlash. In response, JAMA, Nature, and Science all published editorials in January outlining their policies for using ChatGPT and other large language models in the scientific authoring process. Editors from the journals of the American College of Cardiology and the American College of Rheumatology also updated their policies to reflect the influence of AI authoring tools.
The consensus is that AI has no place on the author byline.
“We think that’s not appropriate, because coauthorship means that you are taking responsibility for the analysis and the generation of data that are included in a manuscript. A machine that is dictated by AI can’t take responsibility,” said Daniel Solomon, MD, MPH, a rheumatologist at Brigham and Women’s Hospital, Boston, and the editor in chief of the ACR journal Arthritis & Rheumatology.
Issues with AI
One of the big concerns around using AI in writing is that it can generate text that seems plausible but is untrue or not supported by data. For example, Dr. Greene and colleague Milton Pividori, PhD, also of the University of Colorado, were writing a journal article about new software they developed that uses a large language model to revise scientific manuscripts.
“We used the same software to revise that article and at one point, it added a line that noted that the large language model had been fine-tuned on a data set of manuscripts from within the same field. This makes a lot of sense, and is absolutely something you could do, but was not something that we did,” Dr. Greene said. “Without a really careful review of the content, it becomes possible to invent things that were not actually done.”
In another case, ChatGPT falsely stated that a prominent law professor had been accused of sexual assault, citing a Washington Post article that did not exist.
“We live in a society where we are extremely concerned about fake news,” Dr. Pividori added, “and [these kinds of errors] could certainly exacerbate that in the scientific community, which is very concerning because science informs public policy.”
Another issue is the lack of transparency around how large language models like ChatGPT process and store data used to make queries.
“We have no idea how they are recording all the prompts and things that we input into ChatGPT and their systems,” Dr. Pividori said.
OpenAI recently addressed some privacy concerns by allowing users to turn off their chat history with the AI chatbot, so conversations cannot be used to train or improve the company’s models. But Dr. Greene noted that the terms of service “still remain pretty nebulous.”
Dr. Solomon is also concerned about researchers using these AI tools in authoring without knowing how they work. “The thing we are really concerned about is the fact that [LLMs] are a bit of a black box – people don’t really understand the methodologies,” he said.
A positive tool?
But despite these concerns, many think that these types of AI-assisted tools could have a positive impact on medical publishing, particularly for researchers for whom English is not their first language, noted Catherine Gao, MD, a pulmonary and critical care instructor at Northwestern University, Chicago. She recently led research comparing scientific abstracts written by ChatGPT and real abstracts and discovered that reviewers found it “surprisingly difficult” to differentiate the two.
“The majority of research is published in English,” she said in an email. “Responsible use of LLMs can potentially reduce the burden of writing for busy scientists and improve equity for those who are not native English speakers.”
Dr. Pividori agreed, adding that as a non-native English speaker, he spends much more time working on the structure and grammar of sentences when authoring a manuscript, compared with people who speak English as a first language. He noted that these tools can also be used to automate some of the more monotonous tasks that come along with writing manuscripts and allow researchers to focus on the more creative aspects.
In the future, “I want to focus more on the things that only a human can do and let these tools do all the rest of it,” he said.
New rules
But despite how individual researchers feel about LLMs, they agree that these AI tools are here to stay.
“I think that we should anticipate that they will become part of the medical research establishment over time, when we figure out how to use them appropriately,” Dr. Solomon said.
While the debate over how best to use AI in medical publications will continue, journal editors agree that all authors of a manuscript are solely responsible for content in articles that used AI-assisted technology.
“Authors should carefully review and edit the result because AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased,” the ICMJE guidelines state. “Authors should be able to assert that there is no plagiarism in their paper, including in text and images produced by the AI.” This includes appropriate attribution of all cited materials.
The committee also recommends that authors describe, in both the cover letter and the submitted work, how AI was used in the manuscript writing process. Recently updated guidelines from the World Association of Medical Editors recommend that all prompts used to generate new text or analytical work be provided in submitted work. Dr. Greene also noted that if authors used an AI tool to revise their work, they can include a version of the manuscript untouched by LLMs.
It is similar to a preprint, he said, but rather than publishing a version of a paper prior to peer review, someone is showing a version of a manuscript before it was reviewed and revised by AI. “This type of practice could be a path that lets us benefit from these models,” he said, “without having the drawbacks that many are concerned about.”
Dr. Solomon has financial relationships with AbbVie, Amgen, Janssen, CorEvitas, and Moderna. Both Dr. Greene and Dr. Pividori are inventors in the U.S. Provisional Patent Application No. 63/486,706 that the University of Colorado has filed for the “Publishing Infrastructure For AI-Assisted Academic Authoring” invention with the U.S. Patent and Trademark Office. Dr. Greene and Dr. Pividori also received a grant from the Alfred P. Sloan Foundation to improve their AI-based manuscript revision tool. Dr. Gao reported no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
Don’t screen, just listen
A recent study published in the journal Academic Pediatrics suggests that during health maintenance visits clinicians are giving too little attention to their patients’ sleep problems. Using a questionnaire, researchers surveyed patients’ caregivers’ concerns and observations regarding a variety of sleep problems. The investigators then reviewed the clinicians’ documentation of what transpired at the visit and found that while over 90% of the caregivers reported their child had at least one sleep-related problem, only 20% of the clinicians documented the problem. And only 12% documented a management plan regarding the sleep concerns.
I am always a bit skeptical about studies that rely on clinicians’ “documentation” because clinicians are busy people and don’t always remember to record things they’ve discussed. You and I know that the lawyers’ dictum “if it wasn’t documented it didn’t happen” is rubbish. However, I still find the basic finding of this study concerning. If we are failing to ask about or even listen to caregivers’ concerns about something as important as sleep, we are missing the boat ... a very large boat.
How could this be happening? First, sleep may have fallen victim to the bloated list of topics that well-intentioned single-issue preventive health advocates have tacked on to the health maintenance visit. It’s a burden that few of us can manage without cutting corners.
However, it is more troubling to me that so many clinicians have chosen sleep as one of those corners to cut. This oversight suggests to me that too many of us have failed to realize from our own observations that sleep is incredibly important to the health of our patients ... and to ourselves.
I will admit that I am extremely sensitive to the importance of sleep. Some might say my sensitivity borders on an obsession. But the literature is clear, and becoming more voluminous every year, that sleep is important to the mental health of our patients and their caregivers, to things like obesity, to symptoms that suggest an attention-deficit/hyperactivity disorder, to school success, and to migraine ... to name just a few.
It may be that most of us realize the importance of sleep but feel our society has allowed itself to become so sleep deprived that there is little chance we can turn the ship around by spending just a few minutes trying to help a family undo their deeply ingrained sleep-unfriendly habits.
I am tempted to join those of you who see sleep deprivation as a “why bother” issue. But I’m not ready to throw in the towel. Even simply sharing your observations about the importance of sleep in the whole wellness picture may have an effect.
One of the benefits of retiring in the same community in which I practiced for over 40 years is that at least every month or two I encounter a parent who thanks me for sharing my views on the importance of sleep. They may not recall the little tip or two I gave them, but it seems that urging them to put sleep near the top of their lifestyle priority list has made the difference for them.
If I have failed in getting you to join me in my crusade against sleep deprivation, at least take to heart the most basic message of this study: the investigators found that only 20% of clinicians were addressing a concern that 90% of the caregivers shared. It happened to be sleep, but it could have been anything.
The authors of the study suggest that we need to be more assiduous in our screening for sleep problems. On the contrary: you and I know we don’t need more screening. We just need to be better listeners.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
A recent study published in the journal Academic Pediatrics suggests that during health maintenance visits clinicians are giving too little attention to their patients’ sleep problems. Using a questionnaire, researchers surveyed patients’ caregivers’ concerns and observations regarding a variety of sleep problems. The investigators then reviewed the clinicians’ documentation of what transpired at the visit and found that while over 90% of the caregivers reported their child had at least one sleep related problem, only 20% of the clinicians documented the problem. And, only 12% documented a management plan regarding the sleep concerns.
I am always bit skeptical about studies that rely on clinicians’ “documentation” because clinicians are busy people and don’t always remember to record things they’ve discussed. You and I know that the lawyers’ dictum “if it wasn’t documented it didn’t happen” is rubbish. However, I still find the basic finding of this study concerning. If we are failing to ask about or even listen to caregivers’ concerns about something as important as sleep, we are missing the boat ... a very large boat.
How could this be happening? First, sleep may have fallen victim to the bloated list of topics that well-intentioned single-issue preventive health advocates have tacked on to the health maintenance visit. It’s a burden that few of us can manage without cutting corners.
However, it is more troubling to me that so many clinicians have chosen sleep as one of those corners to cut. This oversight suggests to me that too many of us have failed to realize from our own observations that sleep is incredibly important to the health of our patients ... and to ourselves.
I will admit that I am extremely sensitive to the importance of sleep. Some might say my sensitivity borders on an obsession. But the literature is clear, and becoming more voluminous every year, that sleep matters to the mental health of our patients and their caregivers, to obesity, to symptoms that suggest an attention-deficit/hyperactivity disorder, to school success, and to migraine ... to name just a few.
It may be that most of us realize the importance of sleep but feel our society has allowed itself to become so sleep deprived that there is little chance we can turn the ship around by spending just a few minutes trying to help a family undo their deeply ingrained sleep-unfriendly habits.
I am tempted to join those of you who see sleep deprivation as a “why bother” issue. But I’m not ready to throw in the towel. Even simply sharing your observations about the importance of sleep in the whole wellness picture may have an effect.
One of the benefits of retiring in the same community in which I practiced for over 40 years is that at least every month or two I encounter a parent who thanks me for sharing my views on the importance of sleep. They may not recall the little tip or two I gave them, but it seems that urging them to put sleep near the top of their lifestyle priority list has made the difference for them.
If I have failed in getting you to join me in my crusade against sleep deprivation, at least take to heart the most basic message of this study. That is, the investigators found that only 20% of clinicians were addressing a concern that 90% of the caregivers shared. It happened to be sleep, but it could have been anything.
The authors of the study suggest that we need to be more assiduous in our screening for sleep problems. On the contrary. You and I know we don’t need more screening. We just need to be better listeners.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
When could you be sued for AI malpractice? You’re likely using it now
The ways in which artificial intelligence (AI) may transform the future of medicine are making headlines across the globe. But chances are, you’re already using AI in your practice every day – you may just not realize it.
And whether you recognize the presence of AI or not, the technology could be putting you in danger of a lawsuit, legal experts say.
“For physicians, AI has also not yet drastically changed or improved the way care is provided or consumed,” said Michael LeTang, chief nursing informatics officer and vice president of risk management and patient safety at Healthcare Risk Advisors, part of TDC Group. “Consequently, it may seem like AI is not present in their work streams, but in reality, it has been utilized in health care for several years. As AI technologies continue to develop and become more sophisticated, we can expect them to play an increasingly significant role in health care.”
Today, most AI applications in health care use narrow AI, which is designed to complete a single task without human assistance, as opposed to artificial general intelligence (AGI), which pertains to human-level reasoning and problem solving across a broad spectrum. Here are some ways doctors are using AI throughout the day – sometimes being aware of its assistance, and sometimes being unaware:
- Many doctors use electronic health records (EHRs) with integrated AI that include computerized clinical decision support tools designed to reduce the risk of diagnostic error and to integrate decision-making in the medication ordering function.
- Cardiologists, pathologists, and dermatologists use AI in the interpretation of vast amounts of images, tracings, and complex patterns.
- Surgeons are using AI-enhanced surgical robotics for orthopedic surgeries, such as joint replacement and spine surgery.
- A growing number of doctors are using ChatGPT to assist in drafting prior authorization letters for insurers. Experts say more doctors are also experimenting with ChatGPT to support medical decision-making.
- Within oncology, physicians use machine learning techniques in the form of computer-aided detection systems for early breast cancer detection.
- AI algorithms are often used by health systems for workflow, staffing optimization, population management, and care coordination.
- Some systems within EHRs use AI to indicate high-risk patients.
- Physicians are using AI applications for the early recognition of sepsis, including EHR-integrated decision tools, such as the Hospital Corporation of America Healthcare’s Sepsis Prediction and Optimization Therapy and the Sepsis Early Risk Assessment algorithm.
- About 30% of radiologists use AI in their practice to analyze x-rays and CT scans.
- Epic Systems recently announced a partnership with Microsoft to integrate ChatGPT into MyChart, Epic’s patient portal system. Pilot hospitals will utilize ChatGPT to automatically generate responses to patient-generated questions sent via the portal.
The growth of AI in health care has been enormous, and it’s only going to continue, said Ravi B. Parikh, MD, an assistant professor in the department of medical ethics and health policy and medicine at the University of Pennsylvania, Philadelphia.
“What’s really critical is that physicians, clinicians, and nurses using AI are provided with the tools to understand how artificial intelligence works and, most importantly, understand that they are still accountable for making the ultimate decision,” Mr. LeTang said. “The information is not always going to be the right thing to do or the most accurate thing to do. They’re still liable for making a bad decision, even if AI is driving that.”
What are the top AI legal dangers of today?
A pressing legal risk is becoming too reliant on the suggestions that AI-based systems provide, which can lead to poor care decisions, said Kenneth Rashbaum, a New York–based cybersecurity attorney with more than 25 years of experience in medical malpractice defense.
This can occur, for example, when using clinical support systems that leverage AI, machine learning, or statistical pattern recognition. Today, clinical support systems are commonly administered through EHRs and other computerized clinical workflows. In general, such systems match a patient’s characteristics to a computerized clinical knowledge base. An assessment or recommendation is then presented to the physician for a decision.
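The matching step described above – patient characteristics checked against a clinical knowledge base, with any hits surfaced as prompts for the physician – can be sketched as a toy rule-based check. The rules, field names, and thresholds below are invented placeholders, not clinical guidance or any vendor's actual system:

```python
# Minimal sketch of a rule-based clinical decision support check.
# All rules and thresholds are illustrative placeholders.

def cds_recommendations(patient: dict) -> list[str]:
    """Match patient characteristics against a small knowledge base
    and return advisory prompts for the clinician to review."""
    knowledge_base = [
        # (condition predicate, advisory text)
        (lambda p: p["age"] >= 65 and "warfarin" in p["medications"],
         "Review anticoagulant dosing in patients 65 and older."),
        (lambda p: p["creatinine"] > 1.5 and "metformin" in p["medications"],
         "Elevated creatinine: reassess metformin use."),
    ]
    return [advice for matches, advice in knowledge_base if matches(patient)]

patient = {"age": 72, "medications": {"warfarin"}, "creatinine": 1.1}
for advice in cds_recommendations(patient):
    print(advice)  # advisory only; the clinician makes the final call
```

The key point of the legal argument is visible even in this sketch: the system only surfaces a suggestion, and the decision to act on it – or to weigh it against the rest of the chart – remains the clinician's.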
“If the clinician blindly accepts it without considering whether it’s appropriate for this patient at this time with this presentation, the clinician may bear some responsibility if there is an untoward result,” Mr. Rashbaum said.
“A common claim even in the days before the EMR [electronic medical record] and AI, was that the clinician did not take all available information into account in rendering treatment, including history of past and present condition, as reflected in the records, communication with past and other present treating clinicians, lab and radiology results, discussions with the patient, and physical examination findings,” he said. “So, if the clinician relied upon the support prompt to the exclusion of these other sources of information, that could be a very strong argument for the plaintiff.”
Chatbots, such as OpenAI’s ChatGPT, are another form of AI raising legal red flags. ChatGPT, trained on a massive set of text data, can carry out conversations, write code, draft emails, and answer any question posed. The chatbot has gained considerable credibility for accurately diagnosing rare conditions in seconds, and it recently passed the U.S. Medical Licensing Examination.
It’s unclear how many doctors are signing onto the ChatGPT website daily, but physicians are actively using the chatbot, particularly for assistance with prior authorization letters and to support decision-making processes in their practices, said Mr. LeTang.
When physicians ask ChatGPT a question, however, they should be mindful that ChatGPT could “hallucinate,” a term that refers to a generated response that sounds plausible but is factually incorrect or is unrelated to the context, explains Harvey Castro, MD, an emergency physician, ChatGPT health care expert, and author of the 2023 book “ChatGPT and Healthcare: Unlocking the Potential of Patient Empowerment.”
Acting on ChatGPT’s response without vetting the information places doctors at serious risk of a lawsuit, he said.
“Sometimes, the response is half true and half false,” he said. “Say, I go outside my specialty of emergency medicine and ask it about a pediatric surgical procedure. It could give me a response that sounds medically correct, but then I ask a pediatric cardiologist, and he says, ‘We don’t even do this. This doesn’t even exist!’ Physicians really have to make sure they are vetting the information provided.”
In response to ChatGPT’s growing usage by health care professionals, hospitals and practices are quickly implementing guidelines, policies, and restrictions that caution physicians about the accuracy of ChatGPT-generated information, adds Mr. LeTang.
Emerging best practices include avoiding the input of patient health information, personally identifiable information, or any data that could be commercially valuable or considered the intellectual property of a hospital or health system, he said.
“Another crucial guideline is not to rely solely on ChatGPT as a definitive source for clinical decision-making; physicians must exercise their professional judgment,” he said. “If best practices are not adhered to, the associated risks are present today. However, these risks may become more significant as AI technologies continue to evolve and become increasingly integrated into health care.”
The potential for misdiagnosis by AI systems and the risk of unnecessary procedures if physicians do not thoroughly evaluate and validate AI predictions are other dangers.
As an example, Mr. LeTang described a case in which a physician documents in the EHR that a patient has presented to the emergency department with chest pains and other signs of a heart attack, and an AI algorithm predicts that the patient is experiencing an active myocardial infarction. If the physician then sends the patient for stenting or an angioplasty without other concrete evidence or tests to confirm the diagnosis, the doctor could later face a misdiagnosis complaint if the costly procedures were unnecessary.
“That’s one of the risks of using artificial intelligence,” he said. “A large percentage of malpractice claims is failure to diagnose, delayed diagnosis, or inaccurate diagnosis. What falls in the category of failure to diagnose is sending a patient for an unnecessary procedure or having an adverse event or bad outcome because of the failure to diagnose.”
So far, no AI lawsuits have been filed, but they may make an appearance soon, said Sue Boisvert, senior patient safety risk manager at The Doctors Company, a national medical liability insurer.
“There are hundreds of AI programs currently in use in health care,” she said. “At some point, a provider will make a decision that is contrary to what the AI recommended. The AI may be wrong, or the provider may be wrong. Either way, the provider will neglect to document their clinical reasoning, a patient will be harmed, and we will have the first AI claim.”
Upcoming AI legal risks to watch for
Lawsuits that allege biased patient care by physicians on the basis of algorithmic bias may also be forthcoming, analysts warn.
Much has been written about algorithmic bias that compounds and worsens inequities in socioeconomic status, ethnicity, sexual orientation, and gender in health systems. In 2019, a groundbreaking article in Science shed light on commonly used algorithms that are considered racially biased and how health care professionals often use such information to make medical decisions.
No claims involving AI bias have come down the pipeline yet, but it’s an area to watch, said Ms. Boisvert. She noted a website that highlights complaints and accusations of AI bias, including in health care.
“We need to be sure the training of the AI is appropriate, current, and broad enough so that there is no bias in the AI when it’s participating in the decision-making,” said Ms. Boisvert. “Imagine if the AI is diagnosing based on a dataset that is not local. It doesn’t represent the population at that particular hospital, and it’s providing inaccurate information to the physicians who are then making decisions about treatment.”
In pain management, for example, there are known differences in how patients experience pain, Ms. Boisvert said. If AI were being used to develop an algorithm for how a particular patient’s postoperative pain should be managed, and the algorithm did not account for those differences, the pain control for a certain patient could be inappropriate. A poor outcome resulting from the treatment could lead to a claim against the physician or hospital that used the biased AI system, she said.
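The pain-management scenario can be made concrete with a toy model. Every number below is invented purely for illustration: a score cutoff is tuned on one patient population, then applied to a second population that reports lower scores for the same underlying pain:

```python
# Toy illustration of dataset bias: a cutoff tuned on one population
# can systematically miss another. All numbers are invented.

def fit_cutoff(scores, needs_treatment):
    """Pick the pain-score cutoff that best separates the training examples."""
    best, best_err = 0, len(scores)
    for c in range(11):  # candidate cutoffs on a 0-10 pain scale
        err = sum((s >= c) != t for s, t in zip(scores, needs_treatment))
        if err < best_err:
            best, best_err = c, err
    return best

# Training data drawn only from group A, which tends to report higher scores.
cutoff = fit_cutoff([9, 8, 7, 5, 4], [True, True, True, False, False])

# Group B reports lower scores for the same underlying pain; both of these
# patients actually need treatment, but neither clears the learned cutoff.
group_b_flagged = [s >= cutoff for s in [5, 4]]
print(cutoff, group_b_flagged)
```

The model performs perfectly on the population it was trained on and under-treats everyone in the other group – exactly the failure mode Ms. Boisvert describes when the training dataset does not represent the local patient population.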
In the future, as AI becomes more integrated and accepted in medicine, there may be a risk of legal complaints against doctors for not using AI, said Saurabh Jha, MD, an associate professor of radiology at the University of Pennsylvania, Philadelphia, and a scholar of AI in radiology.
“Ultimately, we might get to a place where AI starts helping physicians detect more or reduce the miss of certain conditions, and it becomes the standard of care,” Dr. Jha said. “For example, if it became part of the standard of care for pulmonary embolism [PE] detection, and you didn’t use it for PE detection, and there was a miss. That could put you at legal risk. We’re not at that stage yet, but that is one future possibility.”
Dr. Parikh envisions an even cloudier liability landscape as the potential grows for AI to control patient care decisions. In such a scenario, rather than just issuing an alert or prediction to a physician, the AI system could trigger an action.
For instance, an algorithm trained to predict sepsis could, once triggered, initiate a nurse-led rapid response or a change in patient care outside the clinician’s control, said Dr. Parikh, who coauthored a recent article on AI and medical liability in The Milbank Quarterly.
“That’s still very much the minority of how AI is being used, but as evidence is growing that AI-based diagnostic tools perform equivalent or even superior to physicians, these autonomous workflows are being considered,” Dr. Parikh said. “When the ultimate action upon the patient is more determined by the AI than what the clinician does, then I think the liability picture gets murkier, and we should be thinking about how we can respond to that from a liability framework.”
How you can prevent AI-related lawsuits
The first step to preventing an AI-related claim is being aware of when and how you are using AI.
Ensure you’re informed about how the AI was trained, Ms. Boisvert stresses.
“Ask questions!” she said. “Is the AI safe? Are the recommendations accurate? Does the AI perform better than current systems? In what way? What databases were used, and did the programmers consider bias? Do I understand how to use the results?”
Never blindly trust the AI but rather view it as a data point in a medical decision, said Dr. Parikh. Ensure that other sources of medical information are properly accessed and that best practices for your specialty are still being followed.
When using any form of AI, document your usage, adds Mr. Rashbaum. A record that clearly outlines how the physician incorporated the AI is critical if a claim later arises in which the doctor is accused of AI-related malpractice, he said.
“Indicating how the AI tool was used, why it was used, and that it was used in conjunction with available clinical information and the clinician’s best judgment could reduce the risk of being found responsible as a result of AI use in a particular case,” he said.
Use chatbots, such as ChatGPT, the way they were intended, as support tools, rather than definitive diagnostic instruments, adds Dr. Castro.
“Doctors should also be well-trained in interpreting and understanding the suggestions provided by ChatGPT and should use their clinical judgment and experience alongside the AI tool for more accurate decision-making,” he said.
In addition, because no AI insurance product exists on the market, physicians and organizations using AI – particularly for direct health care – should evaluate their current insurance or insurance-like products to determine where a claim involving AI might fall and whether the policy would respond, said Ms. Boisvert. The AI vendor/manufacturer will likely have indemnified themselves in the purchase and sale agreement or contract, she said.
It will also become increasingly important for medical practices, hospitals, and health systems to put in place strong data governance strategies, Mr. LeTang said.
“AI relies on good data,” he said. “A data governance strategy is a key component to making sure we understand where the data is coming from, what it represents, how accurate it is, if it’s reproducible, what controls are in place to ensure the right people have the right access, and that if we’re starting to use it to build algorithms, that it’s deidentified.”
While no malpractice claims associated with the use of AI have yet surfaced, this may change as legal courts catch up on the backlog of malpractice claims that were delayed because of COVID-19, and even more so as AI becomes more prevalent in health care, Mr. LeTang said.
“Similar to the attention that autonomous driving systems, like Tesla, receive when the system fails and accidents occur, we can be assured that media outlets will widely publicize AI-related medical adverse events,” he said. “It is crucial for health care professionals, AI developers, and regulatory authorities to work together to ensure the responsible use of AI in health care, with patient safety as the top priority. By doing so, they can mitigate the risks associated with AI implementation and minimize the potential for legal disputes arising from AI-related medical errors.”
A version of this article first appeared on Medscape.com.
Dr. Parikh envisions an even cloudier liability landscape as the potential grows for AI to control patient care decisions. In such a scenario, rather than just issuing an alert or prediction to a physician, the AI system could trigger an action.
For instance, if an algorithm is trained to predict sepsis and, once triggered, the AI could initiate a nurse-led rapid response or a change in patient care outside the clinician’s control, said Dr. Parikh, who coauthored a recent article on AI and medical liability in The Milbank Quarterly.
“That’s still very much the minority of how AI is being used, but as evidence is growing that AI-based diagnostic tools perform equivalent or even superior to physicians, these autonomous workflows are being considered,” Dr. Parikh said. “When the ultimate action upon the patient is more determined by the AI than what the clinician does, then I think the liability picture gets murkier, and we should be thinking about how we can respond to that from a liability framework.”
How you can prevent AI-related lawsuits
The first step to preventing an AI-related claim is being aware of when and how you are using AI.
Ensure you’re informed about how the AI was trained, Ms. Boisvert stresses.
“Ask questions!” she said. “Is the AI safe? Are the recommendations accurate? Does the AI perform better than current systems? In what way? What databases were used, and did the programmers consider bias? Do I understand how to use the results?”
Never blindly trust the AI but rather view it as a data point in a medical decision, said Dr. Parikh. Ensure that other sources of medical information are properly accessed and that best practices for your specialty are still being followed.
When using any form of AI, document your usage, adds Mr. Rashbaum. A record that clearly outlines how the physician incorporated the AI is critical if a claim later arises in which the doctor is accused of AI-related malpractice, he said.
“Indicating how the AI tool was used, why it was used, and that it was used in conjunction with available clinical information and the clinician’s best judgment could reduce the risk of being found responsible as a result of AI use in a particular case,” he said.
Use chatbots, such as ChatGPT, the way they were intended, as support tools, rather than definitive diagnostic instruments, adds Dr. Castro.
“Doctors should also be well-trained in interpreting and understanding the suggestions provided by ChatGPT and should use their clinical judgment and experience alongside the AI tool for more accurate decision-making,” he said.
In addition, because no AI insurance product exists on the market, physicians and organizations using AI – particularly for direct health care – should evaluate their current insurance or insurance-like products to determine where a claim involving AI might fall and whether the policy would respond, said Ms. Boisvert. The AI vendor/manufacturer will likely have indemnified themselves in the purchase and sale agreement or contract, she said.
It will also become increasingly important for medical practices, hospitals, and health systems to put in place strong data governance strategies, Mr. LeTang said.
“AI relies on good data,” he said. “A data governance strategy is a key component to making sure we understand where the data is coming from, what is represents, how accurate it is, if it’s reproducible, what controls are in place to ensure the right people have the right access, and that if we’re starting to use it to build algorithms, that it’s deidentified.”
While no malpractice claims associated with the use of AI have yet surfaced, this may change as legal courts catch up on the backlog of malpractice claims that were delayed because of COVID-19, and even more so as AI becomes more prevalent in health care, Mr. LeTang said.
“Similar to the attention that autonomous driving systems, like Tesla, receive when the system fails and accidents occur, we can be assured that media outlets will widely publicize AI-related medical adverse events,” he said. “It is crucial for health care professionals, AI developers, and regulatory authorities to work together to ensure the responsible use of AI in health care, with patient safety as the top priority. By doing so, they can mitigate the risks associated with AI implementation and minimize the potential for legal disputes arising from AI-related medical errors.”
A version of this article first appeared on Medscape.com.
The ways in which artificial intelligence (AI) may transform the future of medicine are making headlines across the globe. But chances are, you’re already using AI in your practice every day – you may just not realize it.
And whether you recognize the presence of AI or not, the technology could be putting you in danger of a lawsuit, legal experts say.
“For physicians, AI has also not yet drastically changed or improved the way care is provided or consumed,” said Michael LeTang, chief nursing informatics officer and vice president of risk management and patient safety at Healthcare Risk Advisors, part of TDC Group. “Consequently, it may seem like AI is not present in their work streams, but in reality, it has been utilized in health care for several years. As AI technologies continue to develop and become more sophisticated, we can expect them to play an increasingly significant role in health care.”
Today, most AI applications in health care use narrow AI, which is designed to complete a single task without human assistance, as opposed to artificial general intelligence (AGI), which pertains to human-level reasoning and problem solving across a broad spectrum. Here are some ways doctors are using AI throughout the day – sometimes aware of its assistance, and sometimes not:
- Many doctors use electronic health records (EHRs) with integrated AI that include computerized clinical decision support tools designed to reduce the risk of diagnostic error and to integrate decision-making in the medication ordering function.
- Cardiologists, pathologists, and dermatologists use AI in the interpretation of vast amounts of images, tracings, and complex patterns.
- Surgeons are using AI-enhanced surgical robotics for orthopedic surgeries, such as joint replacement and spine surgery.
- A growing number of doctors are using ChatGPT to assist in drafting prior authorization letters for insurers. Experts say more doctors are also experimenting with ChatGPT to support medical decision-making.
- Within oncology, physicians use machine learning techniques in the form of computer-aided detection systems for early breast cancer detection.
- AI algorithms are often used by health systems for workflow, staffing optimization, population management, and care coordination.
- Some systems within EHRs use AI to indicate high-risk patients.
- Physicians are using AI applications for the early recognition of sepsis, including EHR-integrated decision tools, such as HCA Healthcare’s Sepsis Prediction and Optimization of Therapy and the Sepsis Early Risk Assessment algorithm.
- About 30% of radiologists use AI in their practice to analyze x-rays and CT scans.
- Epic Systems recently announced a partnership with Microsoft to integrate ChatGPT into MyChart, Epic’s patient portal system. Pilot hospitals will utilize ChatGPT to automatically generate responses to patient-generated questions sent via the portal.
The growth of AI in health care has been enormous, and it’s only going to continue, said Ravi B. Parikh, MD, an assistant professor in the department of medical ethics and health policy and medicine at the University of Pennsylvania, Philadelphia.
“What’s really critical is that physicians, clinicians, and nurses using AI are provided with the tools to understand how artificial intelligence works and, most importantly, understand that they are still accountable for making the ultimate decision,” Mr. LeTang said. “The information is not always going to be the right thing to do or the most accurate thing to do. They’re still liable for making a bad decision, even if AI is driving that.”
What are the top AI legal dangers of today?
A pressing legal risk is becoming too reliant on the suggestions that AI-based systems provide, which can lead to poor care decisions, said Kenneth Rashbaum, a New York–based cybersecurity attorney with more than 25 years of experience in medical malpractice defense.
This can occur, for example, when using clinical support systems that leverage AI, machine learning, or statistical pattern recognition. Today, clinical support systems are commonly administered through EHRs and other computerized clinical workflows. In general, such systems match a patient’s characteristics to a computerized clinical knowledge base. An assessment or recommendation is then presented to the physician for a decision.
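The matching step described above can be sketched in a few lines. This is a purely illustrative toy, not any vendor's actual clinical decision support system; the rule names, thresholds, and recommendations are invented for the example and are not clinical guidance.

```python
# Hypothetical sketch of a rule-based clinical decision support (CDS) step:
# match a patient's characteristics against a small knowledge base and
# surface recommendations for the physician to accept or reject.
# All rules and thresholds below are illustrative assumptions.

KNOWLEDGE_BASE = [
    # (rule name, predicate over the patient record, recommendation shown)
    ("sepsis_screen",
     lambda p: p["temp_c"] >= 38.3 and p["heart_rate"] > 90,
     "Consider sepsis workup (lactate, blood cultures)."),
    ("renal_dosing",
     lambda p: p["egfr"] < 30 and "metformin" in p["orders"],
     "Review metformin order: reduced renal function."),
]

def evaluate(patient: dict) -> list[str]:
    """Return the recommendations whose predicates match this patient.

    The system only suggests; as the article stresses, the clinician
    remains responsible for the final decision.
    """
    return [advice for name, pred, advice in KNOWLEDGE_BASE if pred(patient)]

patient = {"temp_c": 39.1, "heart_rate": 104, "egfr": 55, "orders": ["lisinopril"]}
print(evaluate(patient))  # fires the sepsis screen only
```

The point of the sketch is the handoff at the end: the output is a prompt, and the liability question in the surrounding paragraphs turns on what the clinician does with it.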
“If the clinician blindly accepts it without considering whether it’s appropriate for this patient at this time with this presentation, the clinician may bear some responsibility if there is an untoward result,” Mr. Rashbaum said.
“A common claim, even in the days before the EMR [electronic medical record] and AI, was that the clinician did not take all available information into account in rendering treatment, including history of past and present condition, as reflected in the records, communication with past and other present treating clinicians, lab and radiology results, discussions with the patient, and physical examination findings,” he said. “So, if the clinician relied upon the support prompt to the exclusion of these other sources of information, that could be a very strong argument for the plaintiff.”
Chatbots, such as OpenAI’s ChatGPT, are another form of AI raising legal red flags. ChatGPT, trained on a massive set of text data, can carry out conversations, write code, draft emails, and answer any question posed. The chatbot has gained considerable credibility for accurately diagnosing rare conditions in seconds, and it recently passed the U.S. Medical Licensing Examination.
It’s unclear how many doctors are signing onto the ChatGPT website daily, but physicians are actively using the chatbot, particularly for assistance with prior authorization letters and to support decision-making processes in their practices, said Mr. LeTang.
When physicians ask ChatGPT a question, however, they should be mindful that ChatGPT could “hallucinate,” a term that refers to a generated response that sounds plausible but is factually incorrect or is unrelated to the context, explains Harvey Castro, MD, an emergency physician, ChatGPT health care expert, and author of the 2023 book “ChatGPT and Healthcare: Unlocking the Potential of Patient Empowerment.”
Acting on ChatGPT’s response without vetting the information places doctors at serious risk of a lawsuit, he said.
“Sometimes, the response is half true and half false,” he said. “Say, I go outside my specialty of emergency medicine and ask it about a pediatric surgical procedure. It could give me a response that sounds medically correct, but then I ask a pediatric cardiologist, and he says, ‘We don’t even do this. This doesn’t even exist!’ Physicians really have to make sure they are vetting the information provided.”
In response to ChatGPT’s growing usage by health care professionals, hospitals and practices are quickly implementing guidelines, policies, and restrictions that caution physicians about the accuracy of ChatGPT-generated information, adds Mr. LeTang.
Emerging best practices include avoiding the input of patient health information, personally identifiable information, or any data that could be commercially valuable or considered the intellectual property of a hospital or health system, he said.
“Another crucial guideline is not to rely solely on ChatGPT as a definitive source for clinical decision-making; physicians must exercise their professional judgment,” he said. “If best practices are not adhered to, the associated risks are present today. However, these risks may become more significant as AI technologies continue to evolve and become increasingly integrated into health care.”
Other dangers include the potential for misdiagnosis by AI systems and the risk of unnecessary procedures if physicians do not thoroughly evaluate and validate AI predictions.
As an example, Mr. LeTang described a case in which a physician documents in the EHR that a patient has presented to the emergency department with chest pains and other signs of a heart attack, and an AI algorithm predicts that the patient is experiencing an active myocardial infarction. If the physician then sends the patient for stenting or an angioplasty without other concrete evidence or tests to confirm the diagnosis, the doctor could later face a misdiagnosis complaint if the costly procedures were unnecessary.
“That’s one of the risks of using artificial intelligence,” he said. “A large percentage of malpractice claims is failure to diagnose, delayed diagnosis, or inaccurate diagnosis. What falls in the category of failure to diagnose is sending a patient for an unnecessary procedure or having an adverse event or bad outcome because of the failure to diagnose.”
So far, no AI lawsuits have been filed, but they may make an appearance soon, said Sue Boisvert, senior patient safety risk manager at The Doctors Company, a national medical liability insurer.
“There are hundreds of AI programs currently in use in health care,” she said. “At some point, a provider will make a decision that is contrary to what the AI recommended. The AI may be wrong, or the provider may be wrong. Either way, the provider will neglect to document their clinical reasoning, a patient will be harmed, and we will have the first AI claim.”
Upcoming AI legal risks to watch for
Lawsuits that allege biased patient care by physicians on the basis of algorithmic bias may also be forthcoming, analysts warn.
Much has been written about algorithmic bias that compounds and worsens inequities in socioeconomic status, ethnicity, sexual orientation, and gender in health systems. In 2019, a groundbreaking article in Science shed light on commonly used algorithms that are considered racially biased and how health care professionals often use such information to make medical decisions.
No claims involving AI bias have come down the pipeline yet, but it’s an area to watch, said Ms. Boisvert. She noted a website that highlights complaints and accusations of AI bias, including in health care.
“We need to be sure the training of the AI is appropriate, current, and broad enough so that there is no bias in the AI when it’s participating in the decision-making,” said Ms. Boisvert. “Imagine if the AI is diagnosing based on a dataset that is not local. It doesn’t represent the population at that particular hospital, and it’s providing inaccurate information to the physicians who are then making decisions about treatment.”
In pain management, for example, there are known differences in how patients experience pain, Ms. Boisvert said. If AI was being used to develop an algorithm for how a particular patient’s postoperative pain should be managed, and the algorithm did not include the differences, the pain control for a certain patient could be inappropriate. A poor outcome resulting from the treatment could lead to a claim against the physician or hospital that used the biased AI system, she said.
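Ms. Boisvert's point about non-local training data can be made concrete with a toy numerical example. Nothing here is a real clinical model; the pain scores, the "fitting" procedure, and both patient populations are invented to show the mechanism only.

```python
# Toy illustration of algorithmic bias from a non-local training set:
# a reported-pain cutoff calibrated on one population can undertreat
# patients in a population that reports pain differently.
# All numbers are invented for illustration.

def fit_cutoff(reported_scores, needs_analgesia_flags):
    """Pick the 0-10 pain-score cutoff that best separates patients
    who needed stronger analgesia in the *training* population."""
    best, best_acc = 0, -1.0
    for cutoff in range(0, 11):
        acc = sum((s >= cutoff) == flag
                  for s, flag in zip(reported_scores, needs_analgesia_flags))
        acc /= len(reported_scores)
        if acc > best_acc:
            best, best_acc = cutoff, acc
    return best

# Training hospital: patients tend to report pain high on the 0-10 scale.
train_scores = [8, 9, 7, 3, 2, 8, 9, 2]
train_needs  = [True, True, True, False, False, True, True, False]
cutoff = fit_cutoff(train_scores, train_needs)

# Local hospital: same underlying need, but pain is reported lower.
local_scores = [3, 4, 2, 1, 0, 3]
local_needs  = [True, True, True, False, False, True]
undertreated = sum(flag and s < cutoff
                   for s, flag in zip(local_scores, local_needs))
print(cutoff, undertreated)  # → 4 3: three of four patients needing analgesia fall below the cutoff
```

The cutoff is "accurate" on the data it was built from and quietly wrong at the hospital that deploys it, which is exactly the claim scenario she describes.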
In the future, as AI becomes more integrated and accepted in medicine, there may be a risk of legal complaints against doctors for not using AI, said Saurabh Jha, MD, an associate professor of radiology at the University of Pennsylvania, Philadelphia, and a scholar of AI in radiology.
“Ultimately, we might get to a place where AI starts helping physicians detect more or reduce the miss of certain conditions, and it becomes the standard of care,” Dr. Jha said. “For example, if it became part of the standard of care for pulmonary embolism [PE] detection, and you didn’t use it for PE detection, and there was a miss, that could put you at legal risk. We’re not at that stage yet, but that is one future possibility.”
Dr. Parikh envisions an even cloudier liability landscape as the potential grows for AI to control patient care decisions. In such a scenario, rather than just issuing an alert or prediction to a physician, the AI system could trigger an action.
For instance, an algorithm trained to predict sepsis could, once triggered, initiate a nurse-led rapid response or a change in patient care outside the clinician’s control, said Dr. Parikh, who coauthored a recent article on AI and medical liability in The Milbank Quarterly.
“That’s still very much the minority of how AI is being used, but as evidence is growing that AI-based diagnostic tools perform equivalent or even superior to physicians, these autonomous workflows are being considered,” Dr. Parikh said. “When the ultimate action upon the patient is more determined by the AI than what the clinician does, then I think the liability picture gets murkier, and we should be thinking about how we can respond to that from a liability framework.”
How you can prevent AI-related lawsuits
The first step to preventing an AI-related claim is being aware of when and how you are using AI.
Ensure you’re informed about how the AI was trained, Ms. Boisvert stresses.
“Ask questions!” she said. “Is the AI safe? Are the recommendations accurate? Does the AI perform better than current systems? In what way? What databases were used, and did the programmers consider bias? Do I understand how to use the results?”
Never blindly trust the AI but rather view it as a data point in a medical decision, said Dr. Parikh. Ensure that other sources of medical information are properly accessed and that best practices for your specialty are still being followed.
When using any form of AI, document your usage, adds Mr. Rashbaum. A record that clearly outlines how the physician incorporated the AI is critical if a claim later arises in which the doctor is accused of AI-related malpractice, he said.
“Indicating how the AI tool was used, why it was used, and that it was used in conjunction with available clinical information and the clinician’s best judgment could reduce the risk of being found responsible as a result of AI use in a particular case,” he said.
Use chatbots, such as ChatGPT, the way they were intended, as support tools, rather than definitive diagnostic instruments, adds Dr. Castro.
“Doctors should also be well-trained in interpreting and understanding the suggestions provided by ChatGPT and should use their clinical judgment and experience alongside the AI tool for more accurate decision-making,” he said.
In addition, because no AI insurance product exists on the market, physicians and organizations using AI – particularly for direct health care – should evaluate their current insurance or insurance-like products to determine where a claim involving AI might fall and whether the policy would respond, said Ms. Boisvert. The AI vendor/manufacturer will likely have indemnified themselves in the purchase and sale agreement or contract, she said.
It will also become increasingly important for medical practices, hospitals, and health systems to put in place strong data governance strategies, Mr. LeTang said.
“AI relies on good data,” he said. “A data governance strategy is a key component to making sure we understand where the data is coming from, what it represents, how accurate it is, if it’s reproducible, what controls are in place to ensure the right people have the right access, and that if we’re starting to use it to build algorithms, that it’s deidentified.”
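One of the governance controls Mr. LeTang lists, checking that data is deidentified before it is used to build algorithms, can be sketched as an automated screen. The field names and the SSN pattern below are illustrative assumptions, not a complete deidentification standard.

```python
# Minimal sketch of one data governance control: screening records for
# identifiers before they feed algorithm development. Illustrative only;
# a real program would cover many more identifier types.

import re

DIRECT_IDENTIFIER_FIELDS = {"name", "mrn", "ssn", "address", "phone"}
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def deidentification_issues(record: dict) -> list[str]:
    """Return a list of governance problems found in one record."""
    issues = [f"direct identifier field present: {k}"
              for k in record if k.lower() in DIRECT_IDENTIFIER_FIELDS]
    # Identifiers can also leak into free-text notes.
    for k, v in record.items():
        if isinstance(v, str) and SSN_PATTERN.search(v):
            issues.append(f"SSN-like value in field: {k}")
    return issues

record = {"age": 67, "note": "Pt SSN 123-45-6789 on file", "mrn": "A1002"}
for issue in deidentification_issues(record):
    print(issue)
```

A check like this is one small piece of the broader strategy he describes: knowing where data came from and who can touch it still requires process, not just code.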
While no malpractice claims associated with the use of AI have yet surfaced, this may change as courts catch up on the backlog of malpractice claims that were delayed because of COVID-19, and even more so as AI becomes more prevalent in health care, Mr. LeTang said.
“Similar to the attention that autonomous driving systems, like Tesla, receive when the system fails and accidents occur, we can be assured that media outlets will widely publicize AI-related medical adverse events,” he said. “It is crucial for health care professionals, AI developers, and regulatory authorities to work together to ensure the responsible use of AI in health care, with patient safety as the top priority. By doing so, they can mitigate the risks associated with AI implementation and minimize the potential for legal disputes arising from AI-related medical errors.”
A version of this article first appeared on Medscape.com.
The enemy of carcinogenic fumes is my friendly begonia
Sowing the seeds of cancer prevention
Are you looking to add to your quality of life, even though pets are not your speed? Might we suggest something with lower maintenance? Something a little greener?
Indoor plants can purify the air that comes from outside. Researchers at the University of Technology Sydney, in partnership with the plantscaping company Ambius, showed that a “green wall” made up of mixed indoor plants was able to suck up 97% of “the most toxic compounds” from the air in just 8 hours. We’re talking about lung-irritating, headache-inducing, cancer risk–boosting compounds from gasoline fumes, including benzene.
Public health initiatives often strive to reduce cardiovascular and obesity risks, but breathing seems pretty important too. According to the World Health Organization, household air pollution is responsible for about 2.5 million global premature deaths each year. And since 2020 we’ve become accustomed to spending more time inside and at home.
“This new research proves that plants should not just be seen as ‘nice to have,’ but rather a crucial part of every workplace wellness plan,” Ambius General Manager Johan Hodgson said in a statement released by the university.
So don’t spend hundreds of dollars on a fancy air filtration system when a wall of plants can do that for next to nothing. Find what works for you and your space and become a plant parent today! Your lungs will thank you.
But officer, I had to swerve to miss the duodenal ampulla
Tiny video capsule endoscopes have been around for many years, but they have one big weakness: The ingestible cameras’ journey through the GI tract is passively driven by gravity and the natural movement of the body, so they often miss potential problem areas.
Not anymore. That flaw has been addressed by medical technology company AnX Robotica, which has taken endoscopy to the next level by adding that wondrous directional control device of the modern electronic age, a joystick.
The new system “uses an external magnet and hand-held video game style joysticks to move the capsule in three dimensions,” which allows physicians to “remotely drive a miniature video capsule to all regions of the stomach to visualize and photograph potential problem areas,” according to Andrew C. Meltzer, MD, of George Washington University and associates, who conducted a pilot study funded by AnX Robotica.
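The control idea in that quote, joystick axes driving an external magnet in three dimensions, can be sketched abstractly. This is not AnX Robotica's actual control code; the axis mapping and step size are invented for illustration.

```python
# Purely illustrative sketch: map normalized hand-held joystick axes to a
# bounded 3-D displacement command for the external steering magnet.
# The 2 mm step size is an assumed, made-up limit.

def joystick_to_magnet_step(x: float, y: float, z: float,
                            max_step_mm: float = 2.0) -> tuple:
    """Convert joystick axes in [-1, 1] to a clamped (dx, dy, dz) in mm."""
    clamp = lambda v: max(-1.0, min(1.0, v))
    return tuple(round(clamp(axis) * max_step_mm, 3) for axis in (x, y, z))

print(joystick_to_magnet_step(0.5, -1.2, 0.0))  # → (1.0, -2.0, 0.0)
```

Clamping the input and bounding the step are the interesting design choices here: whatever the operator does with the stick, the commanded magnet movement stays small and predictable.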
The video capsule provided a 95% rate of visualization in the stomachs of 40 patients who were examined at a medical office building by an emergency medicine physician who had no previous specialty training in endoscopy. “Capsules were driven by the ER physician and then the study reports were reviewed by an attending gastroenterologist who was physically off site,” the investigators said in a written statement.
The capsule operator did receive some additional training, and development of artificial intelligence to self-drive the capsule is in the works, but for now, we’re talking about a device controlled by a human using a joystick. And we all know that 50-year-olds are not especially known for their joystick skills. For that we need real experts. Yup, we need to put those joystick-controlled capsule endoscopes in the hands of teenage gamers. Who wants to go first?
Maybe AI isn’t ready for the big time after all
“How long before some intrepid stockholder says: ‘Hey, instead of paying doctors, why don’t we just use the free robot instead?’ ” Those words appeared on LOTME but a month ago. After all, the AI is supposed to be smarter and more empathetic than a doctor. And did we mention it’s free? Or at least extremely cheap. Cheaper than, say, a group of recently unionized health care workers.
In early May, the paid employees manning the National Eating Disorders Association emergency hotline voted to unionize, as they felt overwhelmed and underpaid. Apparently, paying six people an extra few thousand a year was too much for NEDA’s leadership, as they decided a few weeks later to fire those workers, fully closing down the hotline. Instead of talking to a real person, people “calling in” for support would be met with Tessa, a wellness chatbot that would hopefully guide them through their crisis. Key word, hopefully.
In perhaps the least surprising twist of the year, NEDA was forced to walk back its decision about a week after its initial announcement. It all started with a viral Instagram post from a woman who called in and received the following advice from Tessa: Lose 1-2 pounds a week, count calories and work for a 500- to 1,000-calorie deficit, weigh herself weekly, and restrict her diet. Unfortunately, all of these suggestions were things that led to the development of the woman’s eating disorder.
Naturally, NEDA responded in good grace, accusing the woman of lying. A NEDA vice president even left some nasty comments on the post, but hastily deleted them a day later when NEDA announced it was shutting down Tessa “until further notice for a complete investigation.” NEDA’s CEO insisted they hadn’t seen that behavior from Tessa before, calling it a “bug” and insisting the bot would only be down temporarily until the triggers causing the bug were fixed.
In the aftermath, several doctors and psychologists chimed in, terming the rush to automate human roles dangerous and risky. After all, much of what makes these hotlines effective is the volunteers speaking from their own experience. An unsupervised bot doesn’t seem to have what it takes to deal with a mental health crisis, but we’re betting that Tessa will be back. As a wise cephalopod once said: Nobody gives a care about the fate of labor as long as they can get their instant gratification.
You can’t spell existential without s-t-e-n-t
This week, we’re including a special “bonus” item that, to be honest, has nothing to do with stents. That’s why our editor is making us call this a “bonus” (and making us use quote marks, too): It doesn’t really have anything to do with stents or health care or those who practice health care. Actually, his exact words were, “You can’t just give the readers someone else’s ****ing list and expect to get paid for it.” Did we mention that he looks like Jack Nicklaus but acts like BoJack Horseman?
Anywaaay, we’re pretty sure that the list in question – “America’s Top 10 Most Googled Existential Questions” – says something about the human condition, just not about stents:
1. Why is the sky blue?
2. What do dreams mean?
3. What is the meaning of life?
4. Why am I so tired?
5. Who am I?
6. What is love?
7. Is a hot dog a sandwich?
8. What came first, the chicken or the egg?
9. What should I do?
10. Do animals have souls?
Naturally, NEDA responded in good grace, accusing the woman of lying. A NEDA vice president even left some nasty comments on the post, but hastily deleted them a day later when NEDA announced it was shutting down Tessa “until further notice for a complete investigation.” NEDA’s CEO insisted they hadn’t seen that behavior from Tessa before, calling it a “bug” and insisting the bot would only be down temporarily until the triggers causing the bug were fixed.
In the aftermath, several doctors and psychologists chimed in, terming the rush to automate human roles dangerous and risky. After all, much of what makes these hotlines effective is the volunteers speaking from their own experience. An unsupervised bot doesn’t seem to have what it takes to deal with a mental health crisis, but we’re betting that Tessa will be back. As a wise cephalopod once said: Nobody gives a care about the fate of labor as long as they can get their instant gratification.
You can’t spell existential without s-t-e-n-t
This week, we’re including a special “bonus” item that, to be honest, has nothing to do with stents. That’s why our editor is making us call this a “bonus” (and making us use quote marks, too): It doesn’t really have anything to do with stents or health care or those who practice health care. Actually, his exact words were, “You can’t just give the readers someone else’s ****ing list and expect to get paid for it.” Did we mention that he looks like Jack Nicklaus but acts like BoJack Horseman?
Anywaaay, we’re pretty sure that the list in question – “America’s Top 10 Most Googled Existential Questions” – says something about the human condition, just not about stents:
1. Why is the sky blue?
2. What do dreams mean?
3. What is the meaning of life?
4. Why am I so tired?
5. Who am I?
6. What is love?
7. Is a hot dog a sandwich?
8. What came first, the chicken or the egg?
9. What should I do?
10. Do animals have souls?
Sowing the seeds of cancer prevention
Are you looking to add to your quality of life, even though pets are not your speed? Might we suggest something with lower maintenance? Something a little greener?
Indoor plants can purify the air that comes from outside. Researchers at the University of Technology Sydney, in partnership with the plantscaping company Ambius, showed that a “green wall” made up of mixed indoor plants was able to suck up 97% of “the most toxic compounds” from the air in just 8 hours. We’re talking about lung-irritating, headache-inducing, cancer risk–boosting compounds from gasoline fumes, including benzene.
Public health initiatives often strive to reduce cardiovascular and obesity risks, but breathing seems pretty important too. According to the World Health Organization, household air pollution is responsible for about 2.5 million global premature deaths each year. And since 2020 we’ve become accustomed to spending more time inside and at home.
“This new research proves that plants should not just be seen as ‘nice to have,’ but rather a crucial part of every workplace wellness plan,” Ambius General Manager Johan Hodgson said in a statement released by the university.
So don’t spend hundreds of dollars on a fancy air filtration system when a wall of plants can do that for next to nothing. Find what works for you and your space and become a plant parent today! Your lungs will thank you.
But officer, I had to swerve to miss the duodenal ampulla
Tiny video capsule endoscopes have been around for many years, but they have one big weakness: The ingestible cameras’ journey through the GI tract is passively driven by gravity and the natural movement of the body, so they often miss potential problem areas.
Not anymore. That flaw has been addressed by medical technology company AnX Robotica, which has taken endoscopy to the next level by adding that wondrous directional control device of the modern electronic age, a joystick.
The new system “uses an external magnet and hand-held video game style joysticks to move the capsule in three dimensions,” which allows physicians to “remotely drive a miniature video capsule to all regions of the stomach to visualize and photograph potential problem areas,” according to Andrew C. Meltzer, MD, of George Washington University and associates, who conducted a pilot study funded by AnX Robotica.
The video capsule provided a 95% rate of visualization in the stomachs of 40 patients who were examined at a medical office building by an emergency medicine physician who had no previous specialty training in endoscopy. “Capsules were driven by the ER physician and then the study reports were reviewed by an attending gastroenterologist who was physically off site,” the investigators said in a written statement.
The capsule operator did receive some additional training, and development of artificial intelligence to self-drive the capsule is in the works, but for now, we’re talking about a device controlled by a human using a joystick. And we all know that 50-year-olds are not especially known for their joystick skills. For that we need real experts. Yup, we need to put those joystick-controlled capsule endoscopes in the hands of teenage gamers. Who wants to go first?
Maybe AI isn’t ready for the big time after all
“How long before some intrepid stockholder says: ‘Hey, instead of paying doctors, why don’t we just use the free robot instead?’ ” Those words appeared on LOTME but a month ago. After all, the AI is supposed to be smarter and more empathetic than a doctor. And did we mention it’s free? Or at least extremely cheap. Cheaper than, say, a group of recently unionized health care workers.
In early May, the paid employees manning the National Eating Disorders Association emergency hotline voted to unionize, as they felt overwhelmed and underpaid. Apparently, paying six people an extra few thousand a year was too much for NEDA’s leadership, as they decided a few weeks later to fire those workers, fully closing down the hotline. Instead of talking to a real person, people “calling in” for support would be met with Tessa, a wellness chatbot that would hopefully guide them through their crisis. Key word, hopefully.
In perhaps the least surprising twist of the year, NEDA was forced to walk back its decision about a week after its initial announcement. It all started with a viral Instagram post from a woman who called in and received the following advice from Tessa: Lose 1-2 pounds a week, count calories and work for a 500- to 1,000-calorie deficit, weigh herself weekly, and restrict her diet. Unfortunately, all of these suggestions were things that led to the development of the woman’s eating disorder.
Naturally, NEDA responded with good grace, accusing the woman of lying. A NEDA vice president even left some nasty comments on the post, but hastily deleted them a day later when NEDA announced it was shutting down Tessa “until further notice for a complete investigation.” NEDA’s CEO insisted they hadn’t seen that behavior from Tessa before, calling it a “bug” and insisting the bot would only be down temporarily until the triggers causing the bug were fixed.
In the aftermath, several doctors and psychologists chimed in, terming the rush to automate human roles dangerous and risky. After all, much of what makes these hotlines effective is the volunteers speaking from their own experience. An unsupervised bot doesn’t seem to have what it takes to deal with a mental health crisis, but we’re betting that Tessa will be back. As a wise cephalopod once said: Nobody gives a care about the fate of labor as long as they can get their instant gratification.
You can’t spell existential without s-t-e-n-t
This week, we’re including a special “bonus” item that, to be honest, has nothing to do with stents. That’s why our editor is making us call this a “bonus” (and making us use quote marks, too): It doesn’t really have anything to do with stents or health care or those who practice health care. Actually, his exact words were, “You can’t just give the readers someone else’s ****ing list and expect to get paid for it.” Did we mention that he looks like Jack Nicklaus but acts like BoJack Horseman?
Anywaaay, we’re pretty sure that the list in question – “America’s Top 10 Most Googled Existential Questions” – says something about the human condition, just not about stents:
1. Why is the sky blue?
2. What do dreams mean?
3. What is the meaning of life?
4. Why am I so tired?
5. Who am I?
6. What is love?
7. Is a hot dog a sandwich?
8. What came first, the chicken or the egg?
9. What should I do?
10. Do animals have souls?
‘Never worry alone’: Expand your child mental health comfort zone using supports
That mantra echoed through my postgraduate medical training, and is shared with patients to encourage reaching out for help. But providers are often in the exam room alone with patients whom they are, legitimately, very worried about.
Dr. Rettew’s column last month detailed the systems that are changing (slowly!) to better facilitate the interface between mental health and primary care. Supports are increasingly available at both the clinic level and the state level. Regardless of where your practice is in the process of integration, this moment seems like a great opportunity to review a few favorites.
Who you gonna call?
Child Psychiatry Access Programs, sometimes called Psychiatry Access Lines, are almost everywhere!1 If you haven’t called one yet, click on your state and call! You will have immediate access to mental health resources that are curated and available in your state, child psychiatry expertise, and a way to connect families in need with targeted treatments. A long-term side effect of CPAP utilization may include improved system coordination on behalf of kids.
What about screening?
The AAP has an excellent mental health minute on screening.2 Pediatricians screen thoughtfully for psychosocial and medical concerns. Primary and secondary screenings for mental health are becoming ubiquitous in practices as a first step toward diagnosis and treatment. Primary, or initial, screening can catch concerns in your patient population. These include common tools like the Strengths and Difficulties Questionnaire (SDQ, ages 2-17) or the Pediatric Symptom Checklist (PSC-14, ages 4-17). Subscale scores help point care in the right direction.
Once we know there is a mental health problem through screening or interview, secondary mental health screening and rating scales help find a specific diagnosis. Some basics include the PHQ-A for depression (ages 11-17), the GAD-7 for general anxiety (ages 11+), the SCARED for specific anxiety (ages 8-18), and the Vanderbilt (ages 6+) or SNAP-IV (ages 5+) parent/teacher scales for ADHD/ODD/CD/anxiety/depressive symptoms. The CY-BOCS symptom checklist (ages 6-17) is excellent to determine the extent of OCD symptoms. The asQ (ages 10+) and Columbia (C-SSRS, ages 11+) are must-use screeners to help prevent suicide. Screeners and rating scales are found on many CPAP websites, such as New York’s.3 A site full of these can seem overwhelming, but once you get comfortable with a few favorites, expanding your repertoire little by little makes providing care a lot easier!
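As a rough illustration (not clinical guidance), the published age ranges listed above can be encoded so that the applicable secondary screeners for a given patient fall out of a simple lookup. The tool names and age ranges are taken directly from this column; the dictionary and function names are hypothetical:

```python
# Age ranges for the secondary screeners named in this column.
# An upper bound of None means "and up."
SCREENERS = {
    "PHQ-A (depression)": (11, 17),
    "GAD-7 (general anxiety)": (11, None),
    "SCARED (specific anxiety)": (8, 18),
    "Vanderbilt (ADHD/ODD/CD)": (6, None),
    "SNAP-IV (ADHD/ODD/CD)": (5, None),
    "CY-BOCS (OCD)": (6, 17),
    "asQ (suicide risk)": (10, None),
    "C-SSRS (suicide risk)": (11, None),
}

def applicable_screeners(age):
    """Return, sorted by name, the screeners whose age range includes `age`."""
    return sorted(
        name
        for name, (low, high) in SCREENERS.items()
        if age >= low and (high is None or age <= high)
    )
```

For an 8-year-old, for example, this returns the SCARED, Vanderbilt, SNAP-IV, and CY-BOCS but not the PHQ-A, mirroring the ranges quoted above.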
Treating to target?
When you are fairly certain of the diagnosis, you can feel more confident to treat. Diagnoses can be tools; find the best-fit one, and in a few years, with more information, a different tool might be a better fit.
Some favorite treatment resources include the CPAP guidebook from your state (for example, Washington’s4 and Virginia’s5) and the AACAP parent medication guides.6 They detail evidence-based treatments, including medications, and can help both professionals and families with high health care literacy. The medication tracking form found at the back of each guide is especially useful. Another great book is the DSM-5 Pocket Guide for Child and Adolescent Mental Health.7 Some screeners can be repeated to see whether treatment is working, as the AIMS model suggests: “treat to target”8 specific symptoms until they improve.
How to provide help with few resources?
There is knowing what your patient needs, like a specific therapy, and then there is the challenge of connecting the patient with help. Getting a family started on a first step of treatment while they are on a waiting list can be transformative. One example is treatment for oppositional defiant disorder (ODD); parents can start with the first step, “special time,”9 even before a therapist is available. Or, if a family is struggling with OCD, they can start an exposure and response prevention (ERP) workbook10 or look at the iocdf.org website before seeing a specialized therapist. We all know how unsatisfactory a wait-list is as a treatment plan; it is so empowering to start the family with first steps.
What about connections for us providers?
Leveraging your own relationship with patients who have mental health challenges can be powerful, and staying connected with others is vital to maintaining your own emotional well-being. Having a therapist, being active in your medical chapters, gardening, and connecting your practice to local mental health providers and schools can be rejuvenating. Improving the systems around us prevents burnout and keeps us connected.
And finally ...
So, join the movement to help our fields work better together; walk out of that exam room and listen to your worry about your patients and the systems that support them. Reach out for help, toward child psychiatry access lines, the AAP, AACAP, and other collective agents of change. Share what is making your lives and your patients’ lives easier so we can amplify these together. Let’s worry together, and make things better.
Dr. Margaret Spottswood is a child psychiatrist practicing in an integrated care clinic at the Community Health Centers of Burlington, Vt., a Federally Qualified Health Center. She is also the medical director of the Vermont Child Psychiatry Access Program and a clinical assistant professor in the department of psychiatry at the University of Vermont, Burlington.
References
1. National Network of Child Psychiatry Access Programs. Child Psychiatry Access Programs in the United States. https://www.nncpap.org/map. Accessed 2023 Mar 14.
2. American Academy of Pediatrics. Screening Tools: Pediatric Mental Health Minute Series. https://www.aap.org/en/patient-care/mental-health-minute/screening-tools.
3. New York ProjectTEACH. Child Clinical Rating Scales. https://projectteachny.org/child-rating-scales.
4. Hilt H, Barclay R. Seattle Children’s Primary Care Principles for Child Mental Health. https://www.seattlechildrens.org/globalassets/documents/healthcare-professionals/pal/wa/wa-pal-care-guide.pdf.
5. Virginia Mental Health Access Program. VMAP Guidebook. https://vmap.org/guidebook.
6. American Academy of Child and Adolescent Psychiatry. Parents’ Medication Guides. https://www.aacap.org/AACAP/Families_and_Youth/Family_Resources/Parents_Medication_Guides.aspx.
7. Hilt RJ, Nussbaum AM. DSM-5 Pocket Guide to Child and Adolescent Mental Health. Arlington, Va.: American Psychiatric Association Publishing, 2015.
8. Advanced Integration Mental Health Solutions. Measurement-Based Treatment to Target. https://aims.uw.edu/resource-library/measurement-based-treatment-target.
9. Vermont Child Psychiatry Access Program. Caregiver Guide: Special Time With Children. https://www.chcb.org/wp-content/uploads/2023/03/Special-Time-with-Children-for-Caregivers.pdf.
10. Reuter T. Standing Up to OCD Workbook for Kids. New York: Simon and Schuster, 2019.
How can we make medical training less ‘toxic’?
This transcript has been edited for clarity.
Robert D. Glatter, MD: Welcome. I’m Dr. Robert Glatter, medical adviser for Medscape Emergency Medicine. Joining me to discuss ways to address and reform the toxic culture associated with medical training is Dr. Amy Faith Ho, senior vice president of clinical informatics and analytics at Integrative Emergency Services in Dallas. Also joining us is Dr. Júlia Loyola Ferreira, a pediatric surgeon originally from Brazil, now practicing at Montreal Children’s and focused on advocacy for gender equity and patient-centered care.
Welcome to both of you. Thanks so much for joining me.
Amy Faith Ho, MD, MPH: Thanks so much for having us, Rob.
Dr. Glatter: Amy, I noticed a tweet recently where you talked about how your career choice was affected by the toxic environment in medical school, affecting your choice of residency. Can you elaborate on that?
Dr. Ho: In this instance, what we’re talking about is gender, but it can be directed toward any number of other groups as well.
What you’re alluding to is a tweet by Stanford Surgery Group showing the next residency class, and what was really stunning about this residency class was that it was almost all females. And this was something that took off on social media.
When I saw this, I was really brought back to one of my personal experiences that I chose to share, which was basically that, as a medical student, I really wanted to be a surgeon. I’m an emergency medicine doctor now, so you know that didn’t happen.
The story that I was sharing was that when I was a third-year medical student rotating on surgery, we had a male attending who was very well known at that school at the time who basically would take the female medical students, and instead of clinic, he would round us up. He would have us sit around him in the workplace room while everyone else was seeing patients, and he would have you look at news clippings of himself. He would tell you stories about himself, like he was holding court for the ladies.
It was this very weird culture where my takeaway as a med student was like, “Wow, this is kind of abusive patriarchy that is supported,” because everyone knew about it and was complicit. Even though I really liked surgery, this was just one instance and one example of where you see this culture that really resonates into the rest of life that I didn’t really want to be a part of.
I went into emergency medicine and loved it. It’s also highly procedural, and I was very happy with where I was. What was really interesting about this tweet to me, though, is that it really took off and garnered hundreds of thousands of views on a very niche topic, because what was most revealing is that everyone has a story like this.
It is not just surgery. It is definitely not just one specialty and it is not just one school. It is an endemic problem in medicine. Not only does it change the lives of young women, but it also says so much about the complicity and the culture that we have in medicine that many people were upset about just the same way I was.
Medical training experience in other countries vs. the United States
Dr. Glatter: Júlia, I want to hear about your experience in medical school, surgery, and then fellowship training and up to the present, if possible.
Júlia Loyola Ferreira, MD: In Brazil, as in many countries now, women have made up the majority of medical students since 2010. It's a more female-friendly environment when you're going through medical school, and I was lucky enough to do rotations in areas of surgery where people were friendly to women.
I lived in this tiny bubble that also gave me the privilege of not facing some things that I can imagine people in Brazil in different areas and smaller towns face. In Brazil, people try not to talk about this gender agenda. This is something that's being talked about outside Brazil, but in Brazil, we are years behind. People are not really engaging in this conversation. I thought it was going to be hard for me as a woman, because only around 20% of surgeons in Brazil are female.
I knew it was going to be challenging, but I had no idea how bad it was. When I started and things started happening, the list was big. I have an example of everything that is written about – microaggression, implicit bias, discrimination, harassment.
Every time I would try to speak about it and talk to someone, I would be strongly gaslighted. It was the whole training, the whole 5 years. People would say, “Oh, I don’t think it was like that. I think you were overreacting.” People would come with all these different answers for what I was experiencing, and that was frustrating. That was even harder because I had to cope with everything that was happening and I had no one to turn to. I had no mentors.
When I looked up to women who were in surgery, they would be tougher on us young surgeons than the men and they would tell us that we should not complain because in their time it was even harder. Now, it’s getting better and we are supposed to accept whatever comes.
That was at least a little bit of what I experienced in my training. It was only after I finished and started to do research about it that I really encountered a field of people who would echo what I had been trying to say to so many people in the different hospitals where I worked.
That was the key for me to get out of that situation of being gaslighted and of not being able to really talk about it. Suddenly, I started to publish things about Brazil that nobody was even writing or studying. That gave me a large amount of responsibility, but also motivation to keep going and to see the change.
Valuing women in medicine
Dr. Glatter: This is a very important point that you're raising about the environment of women being hard on other women. We know that men can be very hard on, and also judgmental toward, their trainees.
Amy, how would you respond to that? Was your experience similar in emergency medicine training?
Dr. Ho: I actually don’t feel like it was. I think what Júlia is alluding to is this “mean girls” idea, of “I went through it and thus you have to go through it.” I think you do see this in many specialties. One of the classic ones we hear about, and I don’t want to speak to it too much because it’s not my specialty, is ob.gyn., where it is a very female-dominant surgery group. There’s almost a hazing level that you hear about in some of the more malignant workplaces.
I think that you speak to two really important things. Number one is the numbers game. As you were saying, Brazil actually has many women. That’s awesome. That’s actually different from the United States, especially for the historic, existing workplace and less so for the medical students and for residents. I think step one is having minorities like women just present and there.
Step two is actually including and valuing them. While I think it’s really easy to move away from the women discussion, because there are women when you look around in medicine, it doesn’t mean that women are actually being heard, that they’re actually being accepted, or that their viewpoints are being listened to. A big part of it is normalizing not only seeing women in medicine but also normalizing the narrative of women in medicine.
It’s not just about motherhood; it’s about things like normalizing talking about advancement, academic promotions, pay, culture, being called things like “too reactive,” “anxious,” or “too assertive.” These are all classic things that we hear about when we talk about women.
That's why we're looking to not only conversations like this, but also structured ways for women to discuss being women in medicine. There are many women in medicine groups in emergency medicine, including: Females Working in Emergency Medicine (FemInEM); the American College of Emergency Physicians (ACEP) and Society for Academic Emergency Medicine (SAEM) women's groups, which are the American Association of Women Emergency Physicians (AAWEP) and the Academy for Women in Academic Emergency Medicine (AWAEM), respectively; and the American Medical Women's Association (AMWA).
All of these groups are geared toward normalizing women in medicine, normalizing the narrative of women in medicine, and then working on mentoring and educating so that we can advance our initiatives.
Gender balance is not gender equity
Dr. Glatter: Amy, you bring up a very critical point that mentoring is sort of the antidote to gender-based discrimination. Júlia had written a paper back in November of 2022 that was published in the Journal of Surgical Research talking exactly about this and how important it is to develop mentoring. Part of her research showed that about 20% of medical students who took the survey, about 1,000 people, had mentors, which was very disturbing.
Dr. Loyola Ferreira: Mentorship is one of the ways of changing the reality about gender-based discrimination. Amy’s comment was very strong and we need to really keep saying it, which is that gender balance is not gender equity.
The idea of having more women is not the same as women being recognized as equals, as able as men, and as valued as men. To change this very long culture of male domination, we need support, and this support comes from mentorship.
Although I didn’t have one, I feel that since I started being a mentor for some students, it changed not only them but myself. It gave me strength to keep going, studying, publishing, and going further with this discussion. I feel like the relationship was as good for them as it is for me. That’s how things change.
Diversity, equity, and inclusion training
Dr. Glatter: We’re talking about the reality of gender equity in terms of the ability to have equal respect, recognition, opportunities, and access. That’s really an important point to realize, and for our audience, to understand that gender equity is not gender balance.
Amy, I want to talk about medical school curriculums. Are there advances that you’re aware of being made at certain schools, programs, even in residencies, to enforce these things and make it a priority?
Dr. Ho: We’re really lucky that, as a culture in the United States, medical training is certainly very geared toward diversity. Some of that is certainly unofficial. Some of that just means when they’re looking at a medical school class or looking at rank lists for residency, that they’re cognizant of the different backgrounds that people have. That’s still a step. That is a step, that we’re at least acknowledging it.
There are multiple medical schools and residencies that have more formal unconscious-bias training or diversity, equity, and inclusion (DEI) training, both of which are excellent not only for us in the workplace but also for our patients. Almost all of us will see patients of highly diverse backgrounds. I think the biggest push is looking toward the criteria that we use for selecting trainees and students into our programs. Historically, it’s been MCAT, GPA, and so on.
We’ve really started to ask the question of, are these sorts of “objective criteria” actually biased in institutional ways? They talk about this all the time where GPAs will bias against students from underrepresented minorities (URM). I think all medical students and residencies have really acknowledged that. Although there are still test cutoffs, we are putting an inquisitive eye to what those mean, why they exist, and what are the other things that we should consider. This is all very heartening from what I’m seeing in medical training.
Dr. Glatter: There’s no formal rating system for DEI curriculums right now, like ranking of this school, or this program has more advanced recognition in terms of DEI?
Dr. Ho: No, but on the flip side, U.S. News & World Report classically published one of the major rankings for medical schools. What we saw fairly recently was that very high-tier schools like Harvard and the University of Chicago pulled out of that ranking because it did not acknowledge the value of diversity. That was an incredible stance for medical schools to take, to say, "Hey, you are not evaluating an important criterion of ours."
Dr. Glatter: That’s a great point. Júlia, where are we now in Brazil in terms of awareness of DEI and curriculum in schools and training programs?
Dr. Loyola Ferreira: Our reality is not as good as in the U.S., unfortunately. I don’t see much discussion on residency programs or medical schools at the moment. I see many students bringing it out and trying to make their schools engage in that discussion. This is something that is coming from the bottom up and not from the top down. I think it can lead to change as well. It is a step and it’s a beginning. Institutions should take the responsibility of doing this from the beginning. This is something where Brazil is still years behind you guys.
Dr. Glatter: It’s unfortunate, but certainly it’s important to hear that. What about in Canada and certainly your institution, McGill, where you just completed a master’s degree?
Dr. Loyola Ferreira: Canada is very much like the U.S. This is something that is really happening, and it's happening fast. I see, at least at McGill, a great deal of DEI discussion and inclusion. They have institutional courses for us to do as students, and we are all obliged to take many courses, which I think is really educating, especially for people with different cultures and backgrounds.
Dr. Glatter: Amy, where do you think we are in emergency medicine to look at the other side of it? Comparing surgery with emergency medicine, do you think we’re well advanced in terms of DEI, inclusion criteria, respect, and dignity, or are we really far off?
Dr. Ho: I may be biased, but I think emergency medicine is one of the best in terms of this, and I think there are a couple of reasons for it. One is that we are an inherently team-based organization. The attending, the residents, and the students all work in line with one another. There’s less of a hierarchy.
The same is true for our nurses, pharmacists, techs, and EMS. We all work together as a team. Because of that fairly flat structure, it’s really easy for us to value one another as individuals with our diverse backgrounds. In a way, that’s harder for specialties that are more hierarchical, and I think surgery is certainly one of the most hierarchical.
The second reason why emergency medicine is fairly well off in this is that we're, by nature, a safety-net specialty. We see all comers: patients of all walks, all backgrounds. I think we both recognize the value of physician-patient concordance. When we share characteristics with our patients, we recognize that value immediately at the bedside.
It exposes us to so much diversity. I see a refugee one day and the next patient is someone who is incarcerated. The next patient after that is an important businessman in society. That diversity and whiplash in the type of patients that we see back-to-back helps us see the playing field in a really flat, diverse way. Because of that, I think our culture is much better, as is our understanding of the value and importance of diversity not only for our programs, but also for our patients.
Do female doctors have better patient outcomes?
Dr. Glatter: Specialties working together in the emergency department is so important. Building that team and that togetherness is so critical. Júlia, would you agree?
Dr. Loyola Ferreira: Definitely. Something Amy said that is beautiful is that you recognize yourself in these patients. In surgery, we are taught to try to be away from the patients and not to put ourselves in the same position. We are taught to be less engaging, and this is not good. The good thing is when we really have patient-centered care, when we listen to them, and when we are involved with them.
I saw a publication showing that female and male surgeons treating similar patients had the same surgical outcomes. Women are technically as good as men at surgery and have the same surgical outcomes. However, there is research showing that surgical teams with greater representation of women have improved surgical outcomes, because of patient-centered care and the way women conduct bedside attention to patients, and they have better patient experience measures afterward. That comes not only from the women who are treating the patients but from the whole environment. Women end up bringing men [into the conversation], and that improves patient-centered care and makes the whole team better at attending to patients. Definitely, we are in the moment of patient experience and satisfaction, and increasing women is a way of achieving better patient satisfaction and experience.
Dr. Ho: There’s much to be said about having female clinicians available for patients. It doesn’t have to be just for female patients, although again, concordance between physicians and patients is certainly beneficial. Besides outcomes benefit, there’s even just a communication benefit. The way that women and men communicate is inherently different. The way women and men experience certain things is also inherently different.
A classic example of this is women who are experiencing a heart attack may not actually have chest pain but present with nausea. As a female who’s sensitive to this, when I see a woman throwing up, I am very attuned to something actually being wrong, knowing that they may not present with classic pain for a syndrome, but actually may be presenting with nausea instead. It doesn’t have to be a woman who takes that knowledge and turns it into something at the bedside. It certainly doesn’t have to, but it is just a natural, easy thing to step into as a female.
While I’m really careful to not step into this “women are better than men” or “men are better than women” argument, there’s something to be said about how the availability of female clinicians for all patients, not just female patients, can have benefit. Again, it’s shown in studies with cardiovascular outcomes and cardiologists, it’s certainly shown in ob.gyn., particularly for underrepresented minorities as well for maternal outcomes of Black mothers. It’s certainly shown again in patient satisfaction, which is concordance.
There is a profound level of research already on this that goes beyond just the idea of stacking the bench and putting more women in there. That’s not the value. We’re not just here to check off the box. We’re here to actually lend some value to our patients and, again, to one another as well.
Dr. Glatter: Absolutely. These are excellent points. The point you make about patient presentation is so vital. The research was never really attentive to the fact that women sometimes have nausea in ACS presentations. It was biased. The symptoms that women may have that are not "typical" for ACS weren't included in descriptions of patient presentations. Educating everyone about the types of presentations that we can recognize is vital and important.
Dr. Ho: Yes. It's worth saying that, when you look at how medicine and research developed, classically, who were the research participants? They were often White men. They were college students who, historically, were men, because women were not allowed to go to college.
I say that not to fault the institution, because that was the culture of our history, but to just say it is okay to question things. It is okay to realize that someone’s presenting outside of the box and that maybe we actually need to reframe what even created the walls of the box in the first place.
Dr. Glatter: Thank you again for joining us. I truly appreciate your insight and expertise.
Dr. Glatter is assistant professor of emergency medicine, department of emergency medicine, Hofstra/Northwell, New York. Dr. Ho is senior vice president of clinical informatics & analytics, department of emergency medicine, Integrative Emergency Services, Dallas. Dr. Loyola Ferreira is a master of science candidate, department of experimental surgery, McGill University, Montreal. They reported that they had no conflicts of interest.
A version of this article first appeared on Medscape.com.
The second reason why emergency medicine is fairly well off in this is that we’re, by nature, a safety-net specialty. We see patients of all-comers, all walks, all backgrounds. I think we both recognize the value of physician-patient concordance. When we share characteristics with our patients, we recognize that value immediately at the bedside.
It exposes us to so much diversity. I see a refugee one day and the next patient is someone who is incarcerated. The next patient after that is an important businessman in society. That diversity and whiplash in the type of patients that we see back-to-back helps us see the playing field in a really flat, diverse way. Because of that, I think our culture is much better, as is our understanding of the value and importance of diversity not only for our programs, but also for our patients.
Do female doctors have better patient outcomes?
Dr. Glatter: Specialties working together in the emergency department is so important. Building that team and that togetherness is so critical. Júlia, would you agree?
Dr. Loyola Ferreira: Definitely. Something Amy said that is beautiful is that you recognize yourself in these patients. In surgery, we are taught to try to be away from the patients and not to put ourselves in the same position. We are taught to be less engaging, and this is not good. The good thing is when we really have patient-centered care, when we listen to them, and when we are involved with them.
I saw a publication showing that female and male surgeons treating similar patients had the same surgical outcomes. Women are as good as men technically to do surgery and have the same surgical outcomes. However, there is research showing that surgical teams with greater representation of women have improved surgical outcomes because of patient-centered care and the way women conduct bedside attention to patients. And they have better patient experience measures afterward. That is not only from the women who are treating the patients, but the whole environment. Women end up bringing men [into the conversation] and this better improves patient-centered care, and that makes the whole team a better team attending patients. Definitely, we are in the moment of patient experience and satisfaction, and increasing women is a way of achieving better patient satisfaction and experience.
Dr. Ho: There’s much to be said about having female clinicians available for patients. It doesn’t have to be just for female patients, although again, concordance between physicians and patients is certainly beneficial. Besides outcomes benefit, there’s even just a communication benefit. The way that women and men communicate is inherently different. The way women and men experience certain things is also inherently different.
A classic example of this is women who are experiencing a heart attack may not actually have chest pain but present with nausea. As a female who’s sensitive to this, when I see a woman throwing up, I am very attuned to something actually being wrong, knowing that they may not present with classic pain for a syndrome, but actually may be presenting with nausea instead. It doesn’t have to be a woman who takes that knowledge and turns it into something at the bedside. It certainly doesn’t have to, but it is just a natural, easy thing to step into as a female.
While I’m really careful to not step into this “women are better than men” or “men are better than women” argument, there’s something to be said about how the availability of female clinicians for all patients, not just female patients, can have benefit. Again, it’s shown in studies with cardiovascular outcomes and cardiologists, it’s certainly shown in ob.gyn., particularly for underrepresented minorities as well for maternal outcomes of Black mothers. It’s certainly shown again in patient satisfaction, which is concordance.
There is a profound level of research already on this that goes beyond just the idea of stacking the bench and putting more women in there. That’s not the value. We’re not just here to check off the box. We’re here to actually lend some value to our patients and, again, to one another as well.
Dr. Glatter: Absolutely. These are excellent points. The point you make about patient presentation is so vital. The fact that women have nausea sometimes in ACS presentations, the research never was really attentive to this. It was biased. The symptoms that women may have that are not “typical” for ACS weren’t included in patient presentations. Educating everyone about, overall, the types of presentations that we can recognize is vital and important.
Dr. Ho: Yes. It’s worth saying that, when you look at how medicine and research developed, classically, who were the research participants? They were often White men. They were college students who, historically, because women were not allowed to go to college, were men.
I say that not to fault the institution, because that was the culture of our history, but to just say it is okay to question things. It is okay to realize that someone’s presenting outside of the box and that maybe we actually need to reframe what even created the walls of the box in the first place.
Dr. Glatter: Thank you again for joining us. I truly appreciate your insight and expertise.
Dr. Glatter is assistant professor of emergency medicine, department of emergency medicine, Hofstra/Northwell, New York. Dr. Ho is senior vice president of clinical informatics & analytics, department of emergency medicine, Integrative Emergency Services, Dallas. Dr. Loyola Ferreira is a master of science candidate, department of experimental surgery, McGill University, Montreal. They reported that they had no conflicts of interest.
A version of this article first appeared on Medscape.com.
This transcript has been edited for clarity.
Robert D. Glatter, MD: Welcome. I’m Dr. Robert Glatter, medical adviser for Medscape Emergency Medicine. Joining me to discuss ways to address and reform the toxic culture associated with medical training is Dr. Amy Faith Ho, senior vice president of clinical informatics and analytics at Integrative Emergency Services in Dallas. Also joining us is Dr. Júlia Loyola Ferreira, a pediatric surgeon originally from Brazil, now practicing at Montreal Children’s and focused on advocacy for gender equity and patient-centered care.
Welcome to both of you. Thanks so much for joining me.
Amy Faith Ho, MD, MPH: Thanks so much for having us, Rob.
Dr. Glatter: Amy, I noticed a tweet recently where you talked about how your career choice was affected by the toxic environment in medical school, affecting your choice of residency. Can you elaborate on that?
Dr. Ho: In this instance, what we’re talking about is gender, but it can be directed toward any number of other groups as well.
What you’re alluding to is a tweet by the Stanford Surgery Group showing its next residency class, and what was really stunning about this residency class was that it was almost all female. This was something that took off on social media.
When I saw this, I was really brought back to one of my personal experiences that I chose to share, which was basically that, as a medical student, I really wanted to be a surgeon. I’m an emergency medicine doctor now, so you know that didn’t happen.
The story that I was sharing was that when I was a third-year medical student rotating on surgery, we had a male attending who was very well known at that school at the time who basically would take the female medical students, and instead of clinic, he would round us up. He would have us sit around him in the workplace room while everyone else was seeing patients, and he would have you look at news clippings of himself. He would tell you stories about himself, like he was holding court for the ladies.
It was this very weird culture where my takeaway as a med student was like, “Wow, this is kind of abusive patriarchy that is supported,” because everyone knew about it and was complicit. Even though I really liked surgery, this was just one instance and one example of where you see this culture that really resonates into the rest of life that I didn’t really want to be a part of.
I went into emergency medicine and loved it. It’s also highly procedural, and I was very happy with where I was. What was really interesting about this tweet to me, though, is that it really took off and garnered hundreds of thousands of views on a very niche topic, because what was most revealing is that everyone has a story like this.
It is not just surgery. It is definitely not just one specialty and it is not just one school. It is an endemic problem in medicine. Not only does it change the lives of young women, but it also says so much about the complicity and the culture that we have in medicine that many people were upset about just the same way I was.
Medical training experience in other countries vs. the United States
Dr. Glatter: Júlia, I want to hear about your experience in medical school, surgery, and then fellowship training and up to the present, if possible.
Júlia Loyola Ferreira, MD: In Brazil, as in many countries now, women have made up the majority of medical students since 2010. It’s a more female-friendly environment when you’re going through medical school, and I was lucky enough to do rotations in areas of surgery where people were friendly to women.
I lived in this tiny bubble that also gave me the privilege of not facing some of the things that I can imagine people in different areas and smaller towns in Brazil face. In Brazil, people try not to talk about this gender agenda. This is something that’s being talked about outside Brazil, but within Brazil, we are years behind. People are not really engaging in this conversation. I thought it was going to be hard for me as a woman, because Brazil has only around 20% female surgeons.
I knew it was going to be challenging, but I had no idea how bad it was. When I started and things started happening, the list was big. I have an example of everything that is written about – microaggression, implicit bias, discrimination, harassment.
Every time I would try to speak about it and talk to someone, I would be strongly gaslighted. It was the whole training, the whole 5 years. People would say, “Oh, I don’t think it was like that. I think you were overreacting.” People would come with all these different answers for what I was experiencing, and that was frustrating. That was even harder because I had to cope with everything that was happening and I had no one to turn to. I had no mentors.
When I looked up to women who were in surgery, they would be tougher on us young surgeons than the men, and they would tell us that we should not complain because, in their time, it was even harder; now it’s getting better, and we are supposed to accept whatever comes.
That was at least a little bit of what I experienced in my training. It was only after I finished and started to do research about it that I really encountered a field of people who would echo what I was trying to say to many people in the different hospitals where I worked.
That was the key for me to get out of that situation of being gaslighted and of not being able to really talk about it. Suddenly, I started to publish things about Brazil that nobody was even writing or studying. That gave me a large amount of responsibility, but also motivation to keep going and to see the change.
Valuing women in medicine
Dr. Glatter: This is a very important point that you’re raising about the environment of women being hard on other women. We know that men can be very difficult with, and also judgmental toward, their trainees.
Amy, how would you respond to that? Was your experience similar in emergency medicine training?
Dr. Ho: I actually don’t feel like it was. I think what Júlia is alluding to is this “mean girls” idea, of “I went through it and thus you have to go through it.” I think you do see this in many specialties. One of the classic ones we hear about, and I don’t want to speak to it too much because it’s not my specialty, is ob.gyn., where it is a very female-dominant surgery group. There’s almost a hazing level that you hear about in some of the more malignant workplaces.
I think that you speak to two really important things. Number one is the numbers game. As you were saying, Brazil actually has many women. That’s awesome. That’s actually different from the United States, especially for the historic, existing workplace and less so for the medical students and for residents. I think step one is having minorities like women just present and there.
Step two is actually including and valuing them. While I think it’s really easy to move away from the women discussion, because there are women when you look around in medicine, it doesn’t mean that women are actually being heard, that they’re actually being accepted, or that their viewpoints are being listened to. A big part of it is normalizing not only seeing women in medicine but also normalizing the narrative of women in medicine.
It’s not just about motherhood; it’s about things like normalizing talking about advancement, academic promotions, pay, culture, being called things like “too reactive,” “anxious,” or “too assertive.” These are all classic things that we hear about when we talk about women.
That’s why we’re looking to not only conversations like this, but also structured ways for women to discuss being women in medicine. There are many women in medicine groups in emergency medicine, including: Females Working in Emergency Medicine (FemInEM); the American College of Emergency Physicians (ACEP) and Society for Academic Emergency Medicine (SAEM) women’s groups, which are American Association of Women Emergency Physicians (AAWEP) and Academy for Women in Academic Emergency Medicine (AWAEM), respectively; and the American Medical Women’s Association (AMWA), which is the American Medical Association’s offshoot.
All of these groups are geared toward normalizing women in medicine, normalizing the narrative of women in medicine, and then working on mentoring and educating so that we can advance our initiatives.
Gender balance is not gender equity
Dr. Glatter: Amy, you bring up a very critical point, that mentoring is sort of the antidote to gender-based discrimination. Júlia had written a paper back in November of 2022, published in the Journal of Surgical Research, that talks exactly about this and how important it is to develop mentoring. Part of her research showed that only about 20% of the roughly 1,000 medical students who took the survey had mentors, which was very disturbing.
Dr. Loyola Ferreira: Mentorship is one of the ways of changing the reality about gender-based discrimination. Amy’s comment was very strong and we need to really keep saying it, which is that gender balance is not gender equity.
The idea of having more women is not the same as women being recognized as equals, as able as men, and as valued as men. To change this very long culture of male domination, we need support, and this support comes from mentorship.
Although I didn’t have one, I feel that since I started being a mentor for some students, it changed not only them but myself. It gave me strength to keep going, studying, publishing, and going further with this discussion. I feel like the relationship was as good for them as it is for me. That’s how things change.
Diversity, equity, and inclusion training
Dr. Glatter: We’re talking about the reality of gender equity in terms of the ability to have equal respect, recognition, opportunities, and access. That’s really an important point to realize, and for our audience, to understand that gender equity is not gender balance.
Amy, I want to talk about medical school curriculums. Are there advances that you’re aware of being made at certain schools, programs, even in residencies, to enforce these things and make it a priority?
Dr. Ho: We’re really lucky that, as a culture in the United States, medical training is certainly very geared toward diversity. Some of that is certainly unofficial. Some of that just means when they’re looking at a medical school class or looking at rank lists for residency, that they’re cognizant of the different backgrounds that people have. That’s still a step. That is a step, that we’re at least acknowledging it.
There are multiple medical schools and residencies that have more formal unconscious-bias training or diversity, equity, and inclusion (DEI) training, both of which are excellent not only for us in the workplace but also for our patients. Almost all of us will see patients of highly diverse backgrounds. I think the biggest push is looking toward the criteria that we use for selecting trainees and students into our programs. Historically, it’s been MCAT, GPA, and so on.
We’ve really started to ask the question of, are these sorts of “objective criteria” actually biased in institutional ways? They talk about this all the time where GPAs will bias against students from underrepresented minorities (URM). I think all medical students and residencies have really acknowledged that. Although there are still test cutoffs, we are putting an inquisitive eye to what those mean, why they exist, and what are the other things that we should consider. This is all very heartening from what I’m seeing in medical training.
Dr. Glatter: There’s no formal rating system for DEI curriculums right now, like ranking of this school, or this program has more advanced recognition in terms of DEI?
Dr. Ho: No, but on the flip side, the U.S. News & World Report was classically one of the major rankings for medical schools. What we saw fairly recently was that very high-tier schools like Harvard and University of Chicago pulled out of that ranking because that ranking did not acknowledge the value of diversity. That was an incredible stance for medical schools to take, to say, “Hey, you are not evaluating an important criterion of ours.”
Dr. Glatter: That’s a great point. Júlia, where are we now in Brazil in terms of awareness of DEI and curriculum in schools and training programs?
Dr. Loyola Ferreira: Our reality is not as good as in the U.S., unfortunately. I don’t see much discussion in residency programs or medical schools at the moment. I see many students bringing it up and trying to make their schools engage in that discussion. This is something that is coming from the bottom up rather than from the top down. I think it can lead to change as well. It is a step and it’s a beginning, but institutions should take the responsibility of doing this from the start. This is something where Brazil is still years behind you.
Dr. Glatter: It’s unfortunate, but certainly it’s important to hear that. What about in Canada and certainly your institution, McGill, where you just completed a master’s degree?
Dr. Loyola Ferreira: Canada is very much like the U.S. This is something that is really happening, and it’s happening fast. At McGill, at least, I see a great deal of discussion of DEI and inclusion. They have institutional courses for us to take as students, and we are all required to complete many of them, which I think is really educational, especially for people from different cultures and backgrounds.
Dr. Glatter: Amy, where do you think we are in emergency medicine to look at the other side of it? Comparing surgery with emergency medicine, do you think we’re well advanced in terms of DEI, inclusion criteria, respect, and dignity, or are we really far off?
Dr. Ho: I may be biased, but I think emergency medicine is one of the best in terms of this, and I think there are a couple of reasons for it. One is that we are an inherently team-based organization. The attending, the residents, and the students all work in line with one another. There’s less of a hierarchy.
The same is true for our nurses, pharmacists, techs, and EMS. We all work together as a team. Because of that fairly flat structure, it’s really easy for us to value one another as individuals with our diverse backgrounds. In a way, that’s harder for specialties that are more hierarchical, and I think surgery is certainly one of the most hierarchical.
The second reason why emergency medicine is fairly well off in this is that we’re, by nature, a safety-net specialty. We see all comers: patients from all walks of life and all backgrounds. I think we all recognize the value of physician-patient concordance. When we share characteristics with our patients, we recognize that value immediately at the bedside.
It exposes us to so much diversity. I see a refugee one day, and the next patient is someone who is incarcerated. The next patient after that is a prominent businessman. That diversity and whiplash in the types of patients that we see back-to-back helps us see the playing field in a really flat, diverse way. Because of that, I think our culture is much better, as is our understanding of the value and importance of diversity, not only for our programs but also for our patients.
Do female doctors have better patient outcomes?
Dr. Glatter: Specialties working together in the emergency department is so important. Building that team and that togetherness is so critical. Júlia, would you agree?
Dr. Loyola Ferreira: Definitely. Something beautiful Amy said is that you recognize yourself in these patients. In surgery, we are taught to keep a distance from patients and not to put ourselves in their position. We are taught to be less engaging, and this is not good. The good thing is when we really have patient-centered care, when we listen to patients, and when we are involved with them.
I saw a publication showing that female and male surgeons treating similar patients had the same surgical outcomes. Women are technically as good as men at surgery and have the same surgical outcomes. However, there is research showing that surgical teams with greater representation of women have improved surgical outcomes, because of patient-centered care and the way women conduct bedside attention to patients, and they have better patient experience measures afterward. That comes not only from the women who are treating the patients but from the whole environment. Women end up bringing men [into the conversation], and this improves patient-centered care and makes the whole team better at attending to patients. We are definitely in the moment of patient experience and satisfaction, and increasing the number of women is a way of achieving better patient satisfaction and experience.
Dr. Ho: There’s much to be said about having female clinicians available for patients. It doesn’t have to be just for female patients, although again, concordance between physicians and patients is certainly beneficial. Besides outcomes benefit, there’s even just a communication benefit. The way that women and men communicate is inherently different. The way women and men experience certain things is also inherently different.
A classic example of this is women who are experiencing a heart attack may not actually have chest pain but present with nausea. As a female who’s sensitive to this, when I see a woman throwing up, I am very attuned to something actually being wrong, knowing that they may not present with classic pain for a syndrome, but actually may be presenting with nausea instead. It doesn’t have to be a woman who takes that knowledge and turns it into something at the bedside. It certainly doesn’t have to, but it is just a natural, easy thing to step into as a female.
While I’m really careful not to step into this “women are better than men” or “men are better than women” argument, there’s something to be said about how the availability of female clinicians for all patients, not just female patients, can have benefit. Again, it’s shown in studies of cardiovascular outcomes with cardiologists, and it’s certainly shown in ob.gyn., particularly for underrepresented minorities and for maternal outcomes of Black mothers. It’s certainly shown again in patient satisfaction, which reflects concordance.
There is a profound level of research already on this that goes beyond just the idea of stacking the bench and putting more women in there. That’s not the value. We’re not just here to check off the box. We’re here to actually lend some value to our patients and, again, to one another as well.
Dr. Glatter: Absolutely. These are excellent points. The point you make about patient presentation is so vital. The fact that women sometimes have nausea in ACS presentations is something the research was never really attentive to. It was biased. The symptoms that women may have that are not “typical” for ACS weren’t included in descriptions of patient presentations. Educating everyone about the full range of presentations that we can recognize is vital and important.
Dr. Ho: Yes. It’s worth saying that, when you look at how medicine and research developed, classically, who were the research participants? They were often White men. They were college students who, historically, because women were not allowed to go to college, were men.
I say that not to fault the institution, because that was the culture of our history, but to just say it is okay to question things. It is okay to realize that someone’s presenting outside of the box and that maybe we actually need to reframe what even created the walls of the box in the first place.
Dr. Glatter: Thank you again for joining us. I truly appreciate your insight and expertise.
Dr. Glatter is assistant professor of emergency medicine, department of emergency medicine, Hofstra/Northwell, New York. Dr. Ho is senior vice president of clinical informatics & analytics, department of emergency medicine, Integrative Emergency Services, Dallas. Dr. Loyola Ferreira is a master of science candidate, department of experimental surgery, McGill University, Montreal. They reported that they had no conflicts of interest.
A version of this article first appeared on Medscape.com.
COVID vaccines safe for young children, study finds
TOPLINE:
COVID-19 vaccines from Moderna and Pfizer-BioNTech are safe for children under age 5 years, according to findings from a study funded by the Centers for Disease Control and Prevention.
METHODOLOGY:
- Data came from the Vaccine Safety Datalink, which gathers information from eight health systems in the United States.
- Analyzed data from 135,005 doses given to children aged 4 years and younger who received the Pfizer-BioNTech vaccine and 112,006 doses given to children aged 5 years and younger who received the Moderna version.
- Assessed for 23 safety outcomes, including myocarditis, pericarditis, and seizures.
TAKEAWAY:
- One case of hemorrhagic stroke and one case of pulmonary embolism occurred after vaccination, but these were linked to preexisting congenital abnormalities.
IN PRACTICE:
“These results can provide reassurance to clinicians, parents, and policymakers alike.”
STUDY DETAILS:
The study was led by Kristin Goddard, MPH, a researcher at the Kaiser Permanente Vaccine Study Center in Oakland, Calif., and was funded by the Centers for Disease Control and Prevention.
LIMITATIONS:
The researchers reported low statistical power for early analysis, especially for rare outcomes. In addition, fewer than 25% of children in the database had received a vaccine at the time of analysis.
DISCLOSURES:
A coauthor reported receiving funding from Janssen Vaccines and Prevention for a study unrelated to COVID-19 vaccines. Another coauthor reported receiving grants from Pfizer in 2019 for clinical trials for coronavirus vaccines, and from Merck, GSK, and Sanofi Pasteur for unrelated research.
A version of this article first appeared on Medscape.com.
TOPLINE:
COVID-19 vaccines from Moderna and Pfizer-BioNTech are safe for children under age 5 years, according to findings from a study funded by the Centers for Disease Control and Prevention.
METHODOLOGY:
- Data came from the Vaccine Safety Datalink, which gathers information from eight health systems in the United States.
- Analyzed data from 135,005 doses given to children age 4 and younger who received the Pfizer-BioNTech , and 112,006 doses given to children aged 5 and younger who received the Moderna version.
- Assessed for 23 safety outcomes, including myocarditis, pericarditis, and seizures.
TAKEAWAY:
- One case of hemorrhagic stroke and one case of pulmonary embolism occurred after vaccination but these were linked to preexisting congenital abnormalities.
IN PRACTICE:
“These results can provide reassurance to clinicians, parents, and policymakers alike.”
STUDY DETAILS:
The study was led by Kristin Goddard, MPH, a researcher at the Kaiser Permanente Vaccine Study Center in Oakland, Calif., and was funded by the Centers for Disease Control and Prevention.
LIMITATIONS:
The researchers reported low statistical power for early analysis, especially for rare outcomes. In addition, fewer than 25% of children in the database had received a vaccine at the time of analysis.
DISCLOSURES:
A coauthor reported receiving funding from Janssen Vaccines and Prevention for a study unrelated to COVID-19 vaccines. Another coauthor reported receiving grants from Pfizer in 2019 for clinical trials for coronavirus vaccines, and from Merck, GSK, and Sanofi Pasteur for unrelated research.
A version of this article first appeared on Medscape.com.
TOPLINE:
COVID-19 vaccines from Moderna and Pfizer-BioNTech are safe for children under age 5 years, according to findings from a study funded by the Centers for Disease Control and Prevention.
METHODOLOGY:
- Data came from the Vaccine Safety Datalink, which gathers information from eight health systems in the United States.
- Analyzed data from 135,005 doses given to children age 4 and younger who received the Pfizer-BioNTech , and 112,006 doses given to children aged 5 and younger who received the Moderna version.
- Assessed for 23 safety outcomes, including myocarditis, pericarditis, and seizures.
TAKEAWAY:
- One case of hemorrhagic stroke and one case of pulmonary embolism occurred after vaccination, but these were linked to preexisting congenital abnormalities.
IN PRACTICE:
“These results can provide reassurance to clinicians, parents, and policymakers alike.”
STUDY DETAILS:
The study was led by Kristin Goddard, MPH, a researcher at the Kaiser Permanente Vaccine Study Center in Oakland, Calif., and was funded by the Centers for Disease Control and Prevention.
LIMITATIONS:
The researchers reported low statistical power for early analysis, especially for rare outcomes. In addition, fewer than 25% of children in the database had received a vaccine at the time of analysis.
DISCLOSURES:
A coauthor reported receiving funding from Janssen Vaccines and Prevention for a study unrelated to COVID-19 vaccines. Another coauthor reported receiving grants from Pfizer in 2019 for clinical trials for coronavirus vaccines, and from Merck, GSK, and Sanofi Pasteur for unrelated research.
A version of this article first appeared on Medscape.com.
FROM PEDIATRICS
Review may help clinicians treat adolescents with depression
Depression is common among Canadian adolescents and often goes unnoticed. Many family physicians report feeling unprepared to identify and manage depression in these patients.
“Depression is an increasingly common but treatable condition among adolescents,” the authors wrote. “Primary care physicians and pediatricians are well positioned to support the assessment and first-line management of depression in this group, helping patients to regain their health and function.”
The article was published in CMAJ.
Distinct presentation
More than 40% of cases of depression begin during childhood. Onset at this life stage is associated with worse severity of depression in adulthood and worse social, occupational, and physical health outcomes.
Depression is influenced by genetic and environmental factors. Family history of depression is associated with a three- to fivefold increased risk of depression among older children. Genetic loci are known to be associated with depression, but exposure to parental depression, adverse childhood experiences, and family conflict are also linked to greater risk. Bullying and stigma are associated with greater risk among lesbian, gay, bisexual, and transgender youth.
Compared with adults, adolescents with depression are more likely to be irritable and to have a labile mood, rather than a low mood. Social withdrawal is also more common among adolescents than among adults. Unusual features, such as hypersomnia and increased appetite, may also be present. Anxiety, somatic symptoms, psychomotor agitation, and hallucinations are also more common in adolescents than in adults with depression. It is vital to assess risk of suicidality and self-injury as well as support systems; validated scales such as the Columbia Suicide Severity Rating Scale can be useful.
There is no consensus as to whether universal screening for depression is beneficial among adolescents. “Screening in this age group may be a reasonable approach, however, when implemented together with adequate systems that ensure accurate diagnosis and appropriate follow-up,” wrote the authors.
Management of depression in adolescents should begin with psychoeducation and may include lifestyle modification, psychotherapy, and medication. “Importantly, a suicide risk assessment must be done to ensure appropriateness of an outpatient management plan and facilitate safety planning,” the authors wrote.
Lifestyle interventions may target physical activity, diet, and sleep, since unhealthy patterns in all three are associated with heightened symptoms of depression in this population. Regular moderate to vigorous physical activity, and perhaps physical activity of short duration, can improve mood in adolescents. Reduced consumption of sugar-sweetened drinks, processed foods, and meats, along with greater consumption of fruits and legumes, has been shown to reduce depressive symptoms in randomized, controlled trials with adults.
Among psychotherapeutic approaches, cognitive-behavioral therapy has shown the most evidence of efficacy among adolescents with depression, though it is less effective for those with more severe symptoms, poor coping skills, and nonsuicidal self-injury. Some evidence supports interpersonal therapy, which focuses on relationships and social functioning. The involvement of caregivers may improve the response, compared with psychotherapy that only includes the adolescent.
The authors recommend antidepressant medications in more severe cases or when psychotherapy is ineffective or unavailable. Guidelines generally support trials of at least two SSRIs before switching to another drug class, since efficacy data for other classes are sparser and their side effect profiles are worse.
About 2% of adolescents with depression experience an increase in suicidal ideation and behavior after exposure to antidepressants, usually within the first weeks of initiation, so this potential risk should be discussed with patients and caregivers.
Clinicians feel unprepared
Commenting on the review, Pierre-Paul Tellier, MD, an associate professor of family medicine at McGill University, Montreal, said that clinicians frequently report that they do not feel confident in their ability to manage and diagnose adolescent depression. “We did two systematic reviews to look at the continuing professional development of family physicians in adolescent health, and it turned out that there’s really a very large lack. When we looked at residents and the training that they were getting in adolescent medicine, it was very similar, so they felt unprepared to deal with issues around mental health.”
Medication can be effective, but it can be seen as “an easy way out,” Dr. Tellier added. “It’s not necessarily an ideal plan. What we need to do is to change the person’s way of thinking, the person’s way of responding to a variety of things which will occur throughout their lives. People will have other transition periods in their lives. It’s best if they learn a variety of techniques to deal with depression.”
These techniques include exercise, relaxation methods (which reduce anxiety), and wellness training. Through such techniques, patients “learn a healthier way of living with themselves and who they are, and then this is a lifelong way of learning,” said Dr. Tellier. “If I give you a pill, what I’m teaching is, yes, you can feel better. But you’re not dealing with the problem, you’re just dealing with the symptoms.”
He frequently refers his patients to YouTube videos that outline and explain various strategies. A favorite is a deep breathing exercise presented by Jeremy Howick.
The authors and Dr. Tellier disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM CMAJ
Suicidality risk in youth is highest at night
Investigators found that suicidal ideation and attempts were lowest in the mornings and highest in the evenings, particularly among youth with higher levels of self-critical rumination.
“These are preliminary findings, and there is a need for more data, but they signal potentially that there’s a need for support, particularly at nighttime, and that there might be a potential of targeting self-critical rumination in daily lives of youth,” said lead researcher Anastacia Kudinova, PhD, with the department of psychiatry and human behavior, Alpert Medical School of Brown University, Providence, R.I.
The findings were presented at the late-breaker session at the annual meeting of the Associated Professional Sleep Societies.
Urgent need
Suicidal ideation (SI) is a “robust” predictor of suicidal behavior and, “alarmingly,” both suicidal ideation and suicidal behavior have been increasing, Dr. Kudinova said.
“There is an urgent need to describe proximal time-period risk factors for suicide so that we can identify who is at a greater suicide risk on the time scale of weeks, days, or even hours,” she told attendees.
The researchers asked 165 psychiatrically hospitalized youth aged 11-18 (72% female) about the time of day of their most recent suicide attempt.
More than half (58%) said it occurred in the evenings and nights, followed by daytime (35%) and mornings (7%).
They also assessed the timing of suicidal ideation at home in 61 youth aged 12-15 (61% female) who were discharged after a partial hospitalization program.
They did this using ecological momentary assessments (EMAs) three times a day over 2 weeks. EMAs study people’s thoughts and behavior in their daily lives by repeatedly collecting data in an individual’s normal environment at or close to the time they carry out that behavior.
As in the other sample, youth in this sample also experienced significantly more frequent suicidal ideation later in the day (P < .01).
There was also a significant moderating effect of self-criticism (P < .01), such that more self-critical youth evidenced the highest levels of suicidal ideation later in the day.
True variation or mechanics?
Reached for comment, Paul Nestadt, MD, with Johns Hopkins Bloomberg School of Public Health, Baltimore, noted that EMA is becoming “an interesting way to track high-resolution temporal variation in suicidal ideation and other psych symptoms.”
Dr. Nestadt, who was not involved in the study, said that “it’s not surprising” that the majority of youth attempted suicide in evenings and nights, “as adolescents are generally being supervised in a school setting during daytime hours. It may not be the fluctuation in suicidality that impacts attempt timing so much as the mechanics – it is very hard to attempt suicide in math class.”
The same may be true for the youth in the second sample who were in the partial hospital program. “During the day, they were in therapy groups where feelings of suicidal ideation would have been solicited and addressed in real time,” Dr. Nestadt noted.
“Again, suicidal ideation later in the day may be a practical effect of how they are occupied in the partial hospital program, as opposed to some inherent suicidal ideation increase linked to something endogenous, such as circadian rhythm or cortisol level rises. That said, we do often see more attempts in the evenings in adults as well,” he added.
A vulnerable time
Also weighing in, Casey O’Brien, PsyD, a psychologist in the department of psychiatry at Columbia University Irving Medical Center, New York, said the findings in this study “track” with what she sees in the clinic.
Teens often report in session that the “unstructured time of night – especially the time when they usually should be getting to bed but are kind of staying up – tends to be a very vulnerable time for them,” Dr. O’Brien said in an interview.
“It’s really nice to have research confirm a lot of what we see reported anecdotally from the teens we work with,” said Dr. O’Brien.
Dr. O’Brien heads the intensive adolescent dialectical behavior therapy (DBT) program at Columbia for young people struggling with mental health issues.
“Within the DBT framework, we try to really focus on accepting that this is a vulnerable time and then planning ahead for what the strategies are that they can use to help them transition to bed quickly and smoothly,” Dr. O’Brien said.
These strategies may include spending time with their parents before bed, reading, or building into their bedtime routines things that they find soothing and comforting, like taking a longer shower or having comfortable pajamas to change into, she explained.
“We also work a lot on sleep hygiene strategies to help develop a regular bedtime and have a consistent sleep-wake cycle. We also will plan ahead for using distress tolerance skills during times of emotional vulnerability,” Dr. O’Brien said.
The Columbia DBT program also offers phone coaching “so teens can reach out to a therapist for help using skills outside of a therapeutic hour, and we do find that we get more coaching calls closer to around bedtime,” Dr. O’Brien said.
Support for the study was provided by the National Institute of Mental Health and Bradley Hospital COBRE Center. Dr. Kudinova, Dr. Nestadt, and Dr. O’Brien have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
The Columbia DBT program also offers phone coaching “so teens can reach out to a therapist for help using skills outside of a therapeutic hour, and we do find that we get more coaching calls closer to around bedtime,” Dr. O’Brien said.
Support for the study was provided by the National Institute of Mental Health and Bradley Hospital COBRE Center. Dr. Kudinova, Dr. Nestadt, and Dr. O’Brien have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM SLEEP 2023
IL-17 inhibitor approved in Europe for hidradenitis suppurativa
The biologic is the first interleukin-17A (IL-17A) inhibitor to be approved for the treatment of moderate to severe HS. The manufacturer, Novartis, expects a regulatory decision from the U.S. Food and Drug Administration later this year, according to a company press release announcing the approval.
The European approval is based on the results from the phase 3 SUNSHINE and SUNRISE trials, which evaluated the efficacy, safety, and tolerability of the drug. The multicenter, randomized, placebo-controlled, double-blind trials enrolled a total of more than 1,000 adults with moderate to severe HS.
Patients were randomly assigned to receive subcutaneous secukinumab 300 mg every 2 weeks, secukinumab 300 mg every 4 weeks, or placebo. The treatment was effective at improving the symptoms of HS when given every 2 weeks, according to results recently published in The Lancet.
The primary outcome measure for both trials was HS clinical response – defined as a decrease in abscess and inflammatory nodule count by 50% or more with no increase in the number of abscesses or draining fistulae, compared with baseline.
In the studies, 42% and 45% of patients treated with secukinumab every 2 weeks in the SUNRISE and SUNSHINE trials, respectively, had a clinical response at 16 weeks, compared with 31% and 34% among those who received placebo, statistically significant differences. A significant clinical response was seen at week 4 in the SUNSHINE trial and at week 2 in the SUNRISE trial. In both trials, clinical efficacy was sustained to the end of the trial, at week 52.
Headaches were the most common side effect. They affected approximately 1 in 10 patients in both trials.
HS, also called acne inversa, is a chronic skin condition that causes painful lesions. The condition affects 1%-2% of the U.S. population, according to the nonprofit Hidradenitis Suppurativa Foundation. It also disproportionately affects young adults, women, and Black patients.
In Europe, about 200,000 people live with moderate to severe stages of the condition, according to the Novartis press release.
Secukinumab inhibits IL-17A, a cytokine involved in the inflammation of psoriatic arthritis, plaque psoriasis, ankylosing spondylitis, and nonradiographic axial spondylarthritis. It has been approved for the treatment of those conditions, as well as for the treatment of juvenile idiopathic arthritis and enthesitis-related arthritis in the United States and the European Union.
The only other approved biologic therapy for HS is the tumor necrosis factor inhibitor adalimumab.
Novartis is investigating the potential application of secukinumab for the treatment of lupus nephritis and giant cell arteritis, as well as polymyalgia rheumatica and rotator cuff tendinopathy, according to the company press release.
The study published in The Lancet was funded by Novartis.
A version of this article first appeared on Medscape.com.