Wildfire smoke and air quality: How long could health effects last?
People with moderate to severe asthma, chronic obstructive pulmonary disease, and other risk factors are used to checking air quality warnings before heading outside. But this situation is anything but typical.
Even people not normally at risk can develop burning eyes, a runny nose, and difficulty breathing, all symptoms to watch for as health effects of wildfire smoke. People with heart disease, lung disease, and other conditions that put them at increased risk should take extra precautions. Those affected can also experience trouble sleeping, anxiety, and ongoing mental health issues.
The smoke will stick around the next few days, possibly clearing out early next week when the winds change direction, Weather Channel meteorologist Ari Sarsalari predicted June 8. But that doesn’t mean any physical or mental health effects will clear up as quickly.
“We are seeing dramatic increases in air pollution, and we are seeing increases in patients coming to the ED and the hospital. We expect that this will increase in the days ahead,” said Meredith McCormack, MD, MHS, a volunteer medical spokesperson for the American Lung Association.
“The air quality in our area – Baltimore – and other surrounding areas is not healthy for anyone,” said Dr. McCormack, who specializes in pulmonary and critical care medicine at Johns Hopkins University, Baltimore.
How serious are the health warnings?
Residents of California might be more familiar with the hazards of wildfire smoke, but this is a novel experience for many people along the East Coast. Air quality advisories are popping up on cellphones for people living in Boston, New York, and as far south as Northern Virginia. What should the estimated 75 million to 128 million affected Americans do?
We asked experts to weigh in on when it’s safe or not safe to spend time outside, when to seek medical help, and the best ways for people to protect themselves.
“It’s important to stay indoors and close all windows to reduce exposure to smoke from wildfires. It’s also essential to stay away from any windows that may not have a good seal, in order to minimize any potential exposure to smoke,” said Robert Glatter, MD, editor at large for Medscape Emergency Medicine and an emergency medicine doctor at Lenox Hill Hospital/Northwell Health in New York.
Dr. Glatter noted that placing moist towels under doors and sealing leaking windows can help.
Monitor your symptoms, Dr. McCormack advised, and contact your doctor or go to urgent care if you notice any increase in concerning symptoms, such as shortness of breath, coughing, chest tightness, or wheezing. Also make sure you take recommended medications and have enough on hand, she said.
Fine particles, big concerns
The weather is warming in many parts of the country, and that can mean air conditioning. Adding a MERV 13 filter to a central air conditioning system could reduce exposure to wildfire smoke. Using a portable indoor air purifier with a HEPA filter also can help people without central air conditioning. The filter can help remove small particles in the air but must be replaced regularly.
Smoke from wildfires contains multiple toxins, including heavy metals, carcinogens, and fine particulate matter (PM) under 2.5 microns. Dr. Glatter explained that these particles are about 100 times thinner than a human hair. Because of their size, they can embed deeper into the airways in the lungs and trigger chronic inflammation.
“This has also been linked to increased rates of lung cancer and brain tumors,” he said, based on a 2022 study in Canada.
The effects of smoke from wildfires can continue for many years. After the 2014 Hazelwood coal mine fire, emergency department visits for respiratory conditions and cardiovascular complaints remained elevated for as long as 2-5 years afterward, Dr. Glatter said. Again, large quantities of fine particulate matter in the smoke, less than 2.5 microns (PM 2.5), were to blame.
Exposure to smoke from wildfires during pregnancy has also been linked to abnormal fetal growth, preterm birth, and low birth weight, a January 2023 preprint on medRxiv suggested.
Time to wear a mask again?
A properly fitted N95 mask will be the best approach to lessen exposure to smoke from wildfires, “but by itself cannot eliminate all of the risk,” Dr. Glatter said. Surgical masks can add minimal protection, and cloth masks will not provide any significant protection against the damaging effects of smoke from wildfires.
KN95 masks tend to be more comfortable to wear than N95s. But leakage often occurs that can make this type of protection less effective, Dr. Glatter said.
“Masks are important if you need to go outdoors,” Dr. McCormack said. Also, if you’re traveling by car, set the air conditioning system to recirculate to filter the air inside the vehicle, she recommended.
What does that number mean?
The federal government monitors air quality nationwide through the EPA's U.S. Air Quality Index (AQI), a color-coded scale for ozone levels and particle pollution, the main concern from wildfire smoke. The scale runs from 0 to 500, and the higher the number, the more harmful the pollution. The lowest-risk category is Green, or good (0-50), where air pollution poses little or no risk. The index grows progressively more serious through Yellow, or moderate (51-100), then unhealthy for sensitive groups, unhealthy, and very unhealthy, up to Maroon, the hazardous range of 301 or higher. A Maroon advisory amounts to an emergency health warning in which “everyone is more likely to be affected.” You can check the index for your area at AirNow.gov.
New York is under an air quality alert until midnight Friday with a current “unhealthy” Index report of 200. The city recorded its worst-ever air quality on Wednesday. The New York State Department of Environmental Conservation warns that fine particulate levels – small particles that can enter a person’s lungs – are the biggest concern.
AirNow.gov warns that western New England down to Washington has air quality in the three worst categories – ranging from unhealthy to very unhealthy and hazardous. The ten worst locations on the U.S. Air Quality Index as of 10 a.m. ET on June 8 include the Wilmington, Del., area with an Index of 241, or “very unhealthy.”
Other “very unhealthy” locations have the following Index readings:
- 244: Suburban Washington/Maryland.
- 252: Southern coastal New Jersey.
- 252: Kent County, Del.
- 270: Philadelphia.
- 291: Greater New Castle County, Del.
- 293: Northern Virginia.
- 293: Metropolitan Washington.
These two locations are in the “hazardous” or health emergency warning category:
- 309: Lehigh Valley, Pa.
- 399: Susquehanna Valley, Pa.
To check an air quality advisory in your area, enter your ZIP code at AirNow.gov.
A version of this article first appeared on WebMD.com.
Is ChatGPT a friend or foe of medical publishing?
Artificial intelligence (AI) tools such as ChatGPT should not be listed as authors, and researchers must denote how AI-assisted technologies were used in their work, the International Committee of Medical Journal Editors (ICMJE) said in recently updated guidelines.
These new guidelines are the latest effort by medical journals to define policies for using these large language models (LLMs) in scientific publication. While these AI-assisted tools can help with tasks such as writing, analyzing data, catching mistakes, and much more, they are also prone to errors, noted Casey Greene, PhD, a professor of biomedical informatics at the University of Colorado at Denver, Aurora. It is also not totally clear how information is stored and processed in these kinds of tools, and who has access to that information, he noted.
At the same time, experts argue that these AI tools could have a positive impact on the field by limiting some of the linguistic disparities in scientific publishing as well as alleviating the burden of some monotonous or mechanical tasks that come along with manuscript writing.
What experts can agree on, though, is that the use of AI tools is here to stay. “This is going to become a common tool,” Dr. Greene said. “I don’t think there’s a way out of that at this point.”
A change in medical publishing
OpenAI released ChatGPT in November 2022. In its own words, ChatGPT is “a deep learning model that has been trained on a massive amount of text data to understand and generate humanlike text.” Enter a question or a prompt, and it will respond. For example, when asked how the AI tool can be used in scientific publishing, ChatGPT responded:
“ChatGPT can aid scientific publishing by generating ideas, clarifying concepts, conducting preliminary literature reviews, providing proofreading and editing suggestions, and assisting with outlining and organizing scientific papers. However, it should be used as a supportive tool alongside domain expertise and validation from trusted scientific sources.”
Just a few months after ChatGPT became available, researchers began using this tool in their own work. One individual, Som Biswas, MD, a radiologist at the University of Tennessee Health Science Center in Memphis, reportedly used ChatGPT to author 16 scientific articles in just 4 months, according to the Daily Beast. Five of these articles have been published in four different journals. Dr. Biswas declined to be interviewed for this article.
There were also reports of papers with ChatGPT as one of the listed authors, which sparked backlash. In response, JAMA, Nature, and Science all published editorials in January outlining their policies for using ChatGPT and other large language models in the scientific authoring process. Editors from the journals of the American College of Cardiology and the American College of Rheumatology also updated their policies to reflect the influence of AI authoring tools.
The consensus is that AI has no place on the author byline.
“We think that’s not appropriate, because coauthorship means that you are taking responsibility for the analysis and the generation of data that are included in a manuscript. A machine that is dictated by AI can’t take responsibility,” said Daniel Solomon, MD, MPH, a rheumatologist at Brigham and Women’s Hospital, Boston, and the editor in chief of the ACR journal Arthritis & Rheumatology.
Issues with AI
One of the big concerns around using AI in writing is that it can generate text that seems plausible but is untrue or not supported by data. For example, Dr. Greene and colleague Milton Pividori, PhD, also of the University of Colorado, were writing a journal article about new software they developed that uses a large language model to revise scientific manuscripts.
“We used the same software to revise that article and at one point, it added a line that noted that the large language model had been fine-tuned on a data set of manuscripts from within the same field. This makes a lot of sense, and is absolutely something you could do, but was not something that we did,” Dr. Greene said. “Without a really careful review of the content, it becomes possible to invent things that were not actually done.”
In another case, ChatGPT falsely stated that a prominent law professor had been accused of sexual assault, citing a Washington Post article that did not exist.
“We live in a society where we are extremely concerned about fake news,” Dr. Pividori added, “and [these kinds of errors] could certainly exacerbate that in the scientific community, which is very concerning because science informs public policy.”
Another issue is the lack of transparency around how large language models like ChatGPT process and store data used to make queries.
“We have no idea how they are recording all the prompts and things that we input into ChatGPT and their systems,” Dr. Pividori said.
OpenAI recently addressed some privacy concerns by allowing users to turn off their chat history with the AI chatbot, so conversations cannot be used to train or improve the company’s models. But Dr. Greene noted that the terms of service “still remain pretty nebulous.”
Dr. Solomon is also concerned about researchers using these AI tools in authoring without knowing how they work. “The thing we are really concerned about is the fact that [LLMs] are a bit of a black box – people don’t really understand the methodologies,” he said.
A positive tool?
But despite these concerns, many think that these types of AI-assisted tools could have a positive impact on medical publishing, particularly for researchers for whom English is not their first language, noted Catherine Gao, MD, a pulmonary and critical care instructor at Northwestern University, Chicago. She recently led research comparing scientific abstracts written by ChatGPT and real abstracts and discovered that reviewers found it “surprisingly difficult” to differentiate the two.
“The majority of research is published in English,” she said in an email. “Responsible use of LLMs can potentially reduce the burden of writing for busy scientists and improve equity for those who are not native English speakers.”
Dr. Pividori agreed, adding that as a non-native English speaker, he spends much more time working on the structure and grammar of sentences when authoring a manuscript, compared with people who speak English as a first language. He noted that these tools can also be used to automate some of the more monotonous tasks that come along with writing manuscripts and allow researchers to focus on the more creative aspects.
In the future, “I want to focus more on the things that only a human can do and let these tools do all the rest of it,” he said.
New rules
But despite how individual researchers feel about LLMs, they agree that these AI tools are here to stay.
“I think that we should anticipate that they will become part of the medical research establishment over time, when we figure out how to use them appropriately,” Dr. Solomon said.
While the debate of how to best use AI in medical publications will continue, journal editors agree that all authors of a manuscript are solely responsible for content in articles that used AI-assisted technology.
“Authors should carefully review and edit the result because AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased,” the ICMJE guidelines state. “Authors should be able to assert that there is no plagiarism in their paper, including in text and images produced by the AI.” This includes appropriate attribution of all cited materials.
The committee also recommends that authors write in both the cover letter and submitted work how AI was used in the manuscript writing process. Recently updated guidelines from the World Association of Medical Editors recommend that all prompts used to generate new text or analytical work should be provided in submitted work. Dr. Greene also noted that if authors used an AI tool to revise their work, they can include a version of the manuscript untouched by LLMs.
It is similar to a preprint, he said, but rather than publishing a version of a paper prior to peer review, someone is showing a version of a manuscript before it was reviewed and revised by AI. “This type of practice could be a path that lets us benefit from these models,” he said, “without having the drawbacks that many are concerned about.”
Dr. Solomon has financial relationships with AbbVie, Amgen, Janssen, CorEvitas, and Moderna. Both Dr. Greene and Dr. Pividori are inventors in the U.S. Provisional Patent Application No. 63/486,706 that the University of Colorado has filed for the “Publishing Infrastructure For AI-Assisted Academic Authoring” invention with the U.S. Patent and Trademark Office. Dr. Greene and Dr. Pividori also received a grant from the Alfred P. Sloan Foundation to improve their AI-based manuscript revision tool. Dr. Gao reported no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
In another case, ChatGPT falsely stated that a prominent law professor had been accused of sexual assault, citing a Washington Post article that did not exist.
“We live in a society where we are extremely concerned about fake news,” Dr. Pividori added, “and [these kinds of errors] could certainly exacerbate that in the scientific community, which is very concerning because science informs public policy.”
Another issue is the lack of transparency around how large language models like ChatGPT process and store data used to make queries.
“We have no idea how they are recording all the prompts and things that we input into ChatGPT and their systems,” Dr. Pividori said.
OpenAI recently addressed some privacy concerns by allowing users to turn off their chat history with the AI chatbot, so conversations cannot be used to train or improve the company’s models. But Dr. Greene noted that the terms of service “still remain pretty nebulous.”
Dr. Solomon is also concerned about researchers using these AI tools in authoring without knowing how they work. “The thing we are really concerned about is the fact that [LLMs] are a bit of a black box – people don’t really understand the methodologies,” he said.
A positive tool?
But despite these concerns, many think that these types of AI-assisted tools could have a positive impact on medical publishing, particularly for researchers for whom English is not their first language, noted Catherine Gao, MD, a pulmonary and critical care instructor at Northwestern University, Chicago. She recently led research comparing scientific abstracts written by ChatGPT and real abstracts and discovered that reviewers found it “surprisingly difficult” to differentiate the two.
“The majority of research is published in English,” she said in an email. “Responsible use of LLMs can potentially reduce the burden of writing for busy scientists and improve equity for those who are not native English speakers.”
Dr. Pividori agreed, adding that as a non-native English speaker, he spends much more time working on the structure and grammar of sentences when authoring a manuscript, compared with people who speak English as a first language. He noted that these tools can also be used to automate some of the more monotonous tasks that come along with writing manuscripts and allow researchers to focus on the more creative aspects.
In the future, “I want to focus more on the things that only a human can do and let these tools do all the rest of it,” he said.
New rules
But despite how individual researchers feel about LLMs, they agree that these AI tools are here to stay.
“I think that we should anticipate that they will become part of the medical research establishment over time, when we figure out how to use them appropriately,” Dr. Solomon said.
While the debate over how best to use AI in medical publications will continue, journal editors agree that all authors of a manuscript are solely responsible for the content of articles that used AI-assisted technology.
“Authors should carefully review and edit the result because AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased,” the ICMJE guidelines state. “Authors should be able to assert that there is no plagiarism in their paper, including in text and images produced by the AI.” This includes appropriate attribution of all cited materials.
The committee also recommends that authors write in both the cover letter and submitted work how AI was used in the manuscript writing process. Recently updated guidelines from the World Association of Medical Editors recommend that all prompts used to generate new text or analytical work should be provided in submitted work. Dr. Greene also noted that if authors used an AI tool to revise their work, they can include a version of the manuscript untouched by LLMs.
It is similar to a preprint, he said, but rather than publishing a version of a paper prior to peer review, someone is showing a version of a manuscript before it was reviewed and revised by AI. “This type of practice could be a path that lets us benefit from these models,” he said, “without having the drawbacks that many are concerned about.”
Dr. Solomon has financial relationships with AbbVie, Amgen, Janssen, CorEvitas, and Moderna. Both Dr. Greene and Dr. Pividori are inventors in the U.S. Provisional Patent Application No. 63/486,706 that the University of Colorado has filed for the “Publishing Infrastructure For AI-Assisted Academic Authoring” invention with the U.S. Patent and Trademark Office. Dr. Greene and Dr. Pividori also received a grant from the Alfred P. Sloan Foundation to improve their AI-based manuscript revision tool. Dr. Gao reported no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
Should antibiotic treatment be used toward the end of life?
Diagnosing an infection is complex because of the presence of symptoms that are often nonspecific and that are common in patients in decline toward the end of life. Use of antibiotic therapy in this patient population is still controversial, because the clinical benefits are not clear and the risk of pointless overmedicalization is very high.
Etiology
For patients who are receiving palliative care, the following factors predispose to an infection:
- Increasing frailty.
- Bedbound status and anorexia/cachexia syndrome.
- Weakened immune defenses owing to disease or treatments.
- Changes to skin integrity, related to venous access sites and/or bladder catheterization.
Four-week cutoff
For patients who are expected to live for fewer than 4 weeks, evidence from the literature shows that antimicrobial therapy does not resolve a potential infection or improve the prognosis. Antibiotics should therefore be used only for improving symptom management.
In practice, the most common infections in patients receiving end-of-life care are in the urinary and respiratory tracts. Antibiotics are beneficial in the short term in managing symptoms associated with urinary tract infections (effective in 60%-92% of cases), so they should be considered if the patient is not in the agonal or pre-agonal phase of death.
Antibiotics are also beneficial in managing symptoms associated with respiratory tract infections (effective in up to 53% of cases), so they should be considered if the patient is not in the agonal or pre-agonal phase of death. However, the risk of futility is high. As an alternative, opioids and antitussives could provide greater benefit for patients with dyspnea and cough.
No benefit has been observed with the use of antibiotics to treat symptoms associated with sepsis, abscesses, and deep and complicated infections. Antibiotics are therefore deemed futile in these cases.
In unclear cases, the “2-day rule” is useful. This involves waiting for 2 days, and if the patient remains clinically stable, prescribing antibiotics. If the patient’s condition deteriorates rapidly and progressively, antibiotics should not be prescribed.
Alternatively, one can prescribe antibiotics immediately. If no clinical improvement is observed after 2 days, the antibiotics should be stopped, especially if deterioration of the patient’s condition is rapid and progressive.
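For illustration only, the decision logic of the two strategies above can be sketched in code. This is a toy model of the “2-day rule” as described here, with invented names, and is in no way clinical software:

```python
from enum import Enum

class Course(Enum):
    """Hypothetical summary of the patient's clinical course over 2 days."""
    STABLE = "clinically stable"
    RAPID_DECLINE = "rapid, progressive deterioration"

def watch_and_wait(course_after_2_days: Course) -> bool:
    """Strategy 1: observe for 2 days; start antibiotics only if the patient remains stable."""
    return course_after_2_days is Course.STABLE

def start_then_reassess(improved_after_2_days: bool) -> bool:
    """Strategy 2: start antibiotics immediately; continue only if symptoms improve by day 2."""
    return improved_after_2_days
```

Either strategy reduces to the same principle: antibiotics are given or continued only when the clinical course supports them.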
Increased body temperature is somewhat common in the last days and hours of life and is not generally associated with symptoms. Fever in these cases is not an indication for the use of antimicrobial therapy.
The most common laboratory markers of infection (C-reactive protein level, erythrocyte sedimentation rate, leukocyte level) are not particularly useful in this patient population, because they are affected by the baseline condition as well as by any treatments given and the state of systemic inflammation, which is associated with the decline in overall health in the last few weeks of life.
The choice should be individualized and shared with patients and family members so that the clinical appropriateness of the therapeutic strategy is evident and that decisions regarding antibiotic treatment are not regarded as a failure to treat the patient.
The longer term
In deciding to start antibiotic therapy, consideration must be given to the patient’s overall health, the treatment objectives, the possibility that the antibiotic will resolve the infection or improve the patient’s symptoms, and the estimated prognosis, which must be sufficiently long to allow the antibiotic time to take effect.
This article was translated from Univadis Italy, which is part of the Medscape Professional Network. A version of this article appeared on Medscape.com.
Three ‘synergistic’ problems when taking blood pressure
Too little blood pressure measurement during medical consultations, inadequate measurement technique, and a lack of validated automatic sphygmomanometers are three problems that converge to complicate the diagnosis and control of arterial hypertension in the Americas, said the Pan American Health Organization (PAHO). Hypertension is a silent disease that affects 180 million people in the region and is the main risk factor for cardiovascular disease.
Jarbas Barbosa, MD, MPH, PhD, director of PAHO, said in an interview: “We don’t have specific data for each of these scenarios, but unfortunately, all three doubtless work together to make the situation worse.
“Often, the staff members at our primary care clinics are not prepared to diagnose and treat hypertension, because there aren’t national protocols to raise awareness and prepare them to provide this care to the correct standard. Also, they are often unqualified to take blood pressure readings properly,” he added.
This concern is reflected in the theme the organization chose for World Hypertension Day, which was observed on May 17: Measure your blood pressure accurately, control it, live longer! “We shouldn’t underestimate the importance of taking blood pressure,” warned Silvana Luciani, chief of PAHO’s noncommunicable diseases, violence, and injury prevention unit. But, the experts stressed, it must be done correctly.
Time no problem
It’s important to raise awareness of the value of blood pressure measurement for the general population. However, as multiple studies have shown, one barrier to detecting and controlling hypertension is that doctors and other health care professionals measure blood pressure less frequently in clinic than expected, or they use inappropriate techniques or obsolete or uncalibrated measurement devices.
“The importance of clinic blood pressure measurement has been recognized for many decades, but adherence to guidelines on proper, standardized blood pressure measurement remains uncommon in clinical practice,” concluded a consensus document signed by 25 experts from 13 institutions in the United States, Australia, Germany, the United Kingdom, Canada, Italy, Belgium, and Greece.
The first problem lies in the low number of measurements. A recent study in Argentina of nearly 3,000 visits to the doctor’s office at nine health care centers showed that doctors took blood pressure readings in only one of every seven encounters. Even cardiologists, the specialists with the best performance, did so only half of the time.
“Several factors can come into play: lack of awareness, medical inertia, or lack of appropriate equipment. But it is not for lack of time. How long does it take to take blood pressure three times at 1-minute intervals, with the patient seated and their back supported, as indicated? Four minutes. That’s not very much,” Judith Zilberman, MD, PhD, said in an interview. Dr. Zilberman leads the department of hypertension and the women’s cardiovascular disease area at the Argerich Hospital in Buenos Aires and is the former chair of the Argentinian Society of Hypertension.
Too few blood pressure measurements during medical visits, improper measurement technique, and a lack of validated automated sphygmomanometers are three problems that together complicate the diagnosis and control of hypertension in the Americas, a silent disease that affects 180 million people in the region and is the leading risk factor for cardiovascular disease, according to the Pan American Health Organization (PAHO).
Jarbas Barbosa, MD, MPH, PhD, director of PAHO, said in an interview: “We don’t have specific data for each of these scenarios, but unfortunately, all three doubtless work together to make the situation worse.
“Often, the staff members at our primary care clinics are not prepared to diagnose and treat hypertension, because there aren’t national protocols to raise awareness and prepare them to provide this care to the correct standard. Also, they are often unqualified to take blood pressure readings properly,” he added.
This concern is reflected in the theme the organization chose for World Hypertension Day, which was observed on May 17: Measure your blood pressure accurately, control it, live longer! “We shouldn’t underestimate the importance of taking blood pressure,” warned Silvana Luciani, chief of PAHO’s noncommunicable diseases, violence, and injury prevention unit. But, the experts stressed, it must be done correctly.
Time no problem
It’s important to raise awareness of the value of blood pressure measurement for the general population. However, as multiple studies have shown, one barrier to detecting and controlling hypertension is that doctors and other health care professionals measure blood pressure less frequently in clinic than expected, or they use inappropriate techniques or obsolete or uncalibrated measurement devices.
“The importance of clinic blood pressure measurement has been recognized for many decades, but adherence to guidelines on proper, standardized blood pressure measurement remains uncommon in clinical practice,” concluded a consensus document signed by 25 experts from 13 institutions in the United States, Australia, Germany, the United Kingdom, Canada, Italy, Belgium, and Greece.
The first problem lies in the low number of measurements. A recent study in Argentina of nearly 3,000 visits to the doctor’s office at nine health care centers showed that doctors took blood pressure readings in only one of every seven encounters. Even cardiologists, the specialists with the best performance, did so only half of the time.
“Several factors can come into play: lack of awareness, medical inertia, or lack of appropriate equipment. But it is not for lack of time. How long does it take to take blood pressure three times within a 1-minute interval, with the patient seated and their back supported, as indicated? Four minutes. That’s not very much,” Judith Zilberman, MD, PhD, said in an interview. Dr. Zilberman leads the department of hypertension and the women’s cardiovascular disease area at the Argerich Hospital in Buenos Aires, and is the former chair of the Argentinian Society of Hypertension.
Patricio López-Jaramillo, MD, PhD, said in an interview that the greatest obstacle is the lack of awareness among physicians and other health care staff about the importance of taking proper blood pressure measurements. Dr. López-Jaramillo is president and scientific director of the MASIRA Research Institute at the University of Santander in Bucaramanga, Colombia, and first author of the Manual Práctico de Diagnóstico y Manejo de la Hipertensión Arterial (Practice Guidelines for Diagnosing and Managing Hypertension), published by the Latin American Hypertension Society.
“Medical schools are also responsible for this. They go over this topic very superficially during undergraduate and, even worse, postgraduate training. The lack of time to take correct measurements, or the lack of appropriate instruments, is secondary to this lack of awareness among most health care staff members,” added Dr. López-Jaramillo, who is one of the researchers of the PURE epidemiologic study. Since 2002, it has followed a cohort of 225,000 participants from 27 high-, mid-, and low-income countries.
Dr. Zilberman added that it would be good practice for all primary care physicians to take blood pressure readings regardless of the reason for the visit and whether patients have been diagnosed with hypertension or not. “If a woman goes to her gynecologist because she wants to get pregnant, her blood pressure should also be taken! And any other specialist should interview the patient, ascertain her history, what medications she’s on, and then ask if her blood pressure has been taken recently,” she recommended.
Measure well
The second factor to consider is that a correct technique should be used to take blood pressure readings in the doctor’s office or clinic so as not to produce inaccurate results that could lead to underdiagnosis, overdiagnosis, or a poor assessment of the patient’s response to prescribed treatments. An observational study performed in Uruguay in 2017 showed that only 5% of 302 blood pressure measurements followed appropriate procedures.
A new fact sheet from PAHO lists the following eight requirements for obtaining an accurate reading: don’t have a conversation, support the arm at heart level, put the cuff on a bare arm, use the correct cuff size, support the feet, keep the legs uncrossed, ensure the patient has an empty bladder, and support the back.
Though most guidelines recommend taking three readings, the “pragmatic” approach proposed in the international consensus accepts at least two readings separated by a minimum of 30 seconds, which are then averaged. There is evidence that simplified protocols can be used, at least for population screening.
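For illustration only, the simplified two-reading protocol can be sketched as a short script. The function names are invented, and the 140/90 mm Hg cutoff (the hypertension threshold the article cites for home readings) is used purely as an example; this is a sketch, not clinical software:

```python
def average_bp(readings):
    """Average (systolic, diastolic) readings taken at least
    30 seconds apart, per the simplified two-reading protocol."""
    if len(readings) < 2:
        raise ValueError("protocol requires at least two readings")
    n = len(readings)
    return (sum(r[0] for r in readings) / n,
            sum(r[1] for r in readings) / n)

def exceeds_threshold(systolic, diastolic, cutoff=(140, 90)):
    """True when an averaged reading is above the 140/90 mm Hg cutoff."""
    return systolic > cutoff[0] or diastolic > cutoff[1]
```

Of course, no software check can replace the preparation steps the consensus calls for (rest period, correct cuff size, supported arm and back).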
The authors of the new document also recommend preparing the patient before taking the measurement. The patient should be asked not to smoke, exercise, or consume alcohol or caffeine for at least 30 minutes beforehand. He or she should rest for a period of 3-5 minutes without speaking or being spoken to before the measurement is taken.
Lastly, clinically validated automated measurement devices should be used, as called for by the PAHO HEARTS initiative in the Americas. “The sphygmomanometer or classic aneroid tensiometer for the auscultatory method, which is still used way too often at doctor’s office visits in the region, has many weaknesses – not only the device itself but also the way it’s used (human error). This produces a rounded, approximate reading,” stressed Dr. Zilberman.
Automated devices also minimize interactions with the patient by reducing distractions during the preparation and measurement phases and freeing up time for the health care professional. “To [check for a] fever, we use the appropriate thermometer in the appropriate location. We should do the same for blood pressure,” she added.
The STRIDE-BP database, which is affiliated with the European Society of Hypertension, the International Society of Hypertension, and the World Hypertension League, contains an updated list of validated devices for measuring blood pressure.
The signers of the consensus likewise recognized that, beyond taking blood pressure measurements during office visits, the best measurements are those taken at home outside the context of medical care (doctor’s office or clinic) and that the same recommendations are directly applicable. “Few diseases can be detected so easily as with a simple at-home assessment performed by the individual himself or herself. If after three consecutive measurements, readings above 140/90 mm Hg are obtained, the individual should see the doctor to set up a comprehensive treatment program,” said Pablo Rodríguez, MD, secretary of the Argentinian Society of Hypertension. From now through September 14 (Day for Patients With Hypertension), the society is conducting a campaign to take blood pressure measurements at different locations across the country.
Dr. Zilberman and Dr. López-Jaramillo disclosed no relevant financial relationships.
This article was translated from the Medscape Spanish Edition. A version appeared on Medscape.com.
When could you be sued for AI malpractice? You’re likely using it now
The ways in which artificial intelligence (AI) may transform the future of medicine are making headlines across the globe. But chances are, you’re already using AI in your practice every day – you may just not realize it.
And whether you recognize the presence of AI or not, the technology could be putting you in danger of a lawsuit, legal experts say.
“For physicians, AI has also not yet drastically changed or improved the way care is provided or consumed,” said Michael LeTang, chief nursing informatics officer and vice president of risk management and patient safety at Healthcare Risk Advisors, part of TDC Group. “Consequently, it may seem like AI is not present in their work streams, but in reality, it has been utilized in health care for several years. As AI technologies continue to develop and become more sophisticated, we can expect them to play an increasingly significant role in health care.”
Today, most AI applications in health care use narrow AI, which is designed to complete a single task without human assistance, as opposed to artificial general intelligence (AGI), which pertains to human-level reasoning and problem solving across a broad spectrum. Here are some ways doctors are using AI throughout the day – sometimes being aware of its assistance, and sometimes being unaware:
- Many doctors use electronic health records (EHRs) with integrated AI that include computerized clinical decision support tools designed to reduce the risk of diagnostic error and to integrate decision-making in the medication ordering function.
- Cardiologists, pathologists, and dermatologists use AI in the interpretation of vast amounts of images, tracings, and complex patterns.
- Surgeons are using AI-enhanced surgical robotics for orthopedic surgeries, such as joint replacement and spine surgery.
- A growing number of doctors are using ChatGPT to assist in drafting prior authorization letters for insurers. Experts say more doctors are also experimenting with ChatGPT to support medical decision-making.
- Within oncology, physicians use machine learning techniques in the form of computer-aided detection systems for early breast cancer detection.
- AI algorithms are often used by health systems for workflow, staffing optimization, population management, and care coordination.
- Some systems within EHRs use AI to indicate high-risk patients.
- Physicians are using AI applications for the early recognition of sepsis, including EHR-integrated decision tools, such as HCA Healthcare’s Sepsis Prediction and Optimization of Therapy algorithm and the Sepsis Early Risk Assessment algorithm.
- About 30% of radiologists use AI in their practice to analyze x-rays and CT scans.
- Epic Systems recently announced a partnership with Microsoft to integrate ChatGPT into MyChart, Epic’s patient portal system. Pilot hospitals will utilize ChatGPT to automatically generate responses to patient-generated questions sent via the portal.
The growth of AI in health care has been enormous, and it’s only going to continue, said Ravi B. Parikh, MD, an assistant professor in the department of medical ethics and health policy and medicine at the University of Pennsylvania, Philadelphia.
“What’s really critical is that physicians, clinicians, and nurses using AI are provided with the tools to understand how artificial intelligence works and, most importantly, understand that they are still accountable for making the ultimate decision,” Mr. LeTang said. “The information is not always going to be the right thing to do or the most accurate thing to do. They’re still liable for making a bad decision, even if AI is driving that.”
What are the top AI legal dangers of today?
A pressing legal risk is becoming too reliant on the suggestions that AI-based systems provide, which can lead to poor care decisions, said Kenneth Rashbaum, a New York–based cybersecurity attorney with more than 25 years of experience in medical malpractice defense.
This can occur, for example, when using clinical support systems that leverage AI, machine learning, or statistical pattern recognition. Today, clinical support systems are commonly administered through EHRs and other computerized clinical workflows. In general, such systems match a patient’s characteristics to a computerized clinical knowledge base. An assessment or recommendation is then presented to the physician for a decision.
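To make the matching step concrete, here is a toy sketch of a rule-based lookup of the kind described above. The rules, field names, and suggestions are invented for illustration and do not reflect any real product or actual clinical guidance:

```python
# Toy illustration of a rule-based clinical decision support lookup.
# Every rule and suggestion below is a made-up example.
KNOWLEDGE_BASE = [
    {"if": {"age_over": 65, "symptom": "chest pain"},
     "suggest": "obtain ECG and troponin"},
    {"if": {"symptom": "fever", "neutropenic": True},
     "suggest": "consider sepsis workup"},
]

def match_patient(patient):
    """Return suggestions whose conditions all match the patient record."""
    suggestions = []
    for rule in KNOWLEDGE_BASE:
        cond = rule["if"]
        ok = True
        if "age_over" in cond and not patient.get("age", 0) > cond["age_over"]:
            ok = False
        if "symptom" in cond and cond["symptom"] not in patient.get("symptoms", []):
            ok = False
        if "neutropenic" in cond and patient.get("neutropenic") != cond["neutropenic"]:
            ok = False
        if ok:
            suggestions.append(rule["suggest"])
    return suggestions
```

Whatever the matching logic, the output is only an assessment presented to the physician; the decision, and with it the liability, remains with the clinician.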
“If the clinician blindly accepts it without considering whether it’s appropriate for this patient at this time with this presentation, the clinician may bear some responsibility if there is an untoward result,” Mr. Rashbaum said.
“A common claim even in the days before the EMR [electronic medical record] and AI, was that the clinician did not take all available information into account in rendering treatment, including history of past and present condition, as reflected in the records, communication with past and other present treating clinicians, lab and radiology results, discussions with the patient, and physical examination findings,” he said. “So, if the clinician relied upon the support prompt to the exclusion of these other sources of information, that could be a very strong argument for the plaintiff.”
Chatbots, such as OpenAI’s ChatGPT, are another form of AI raising legal red flags. ChatGPT, trained on a massive set of text data, can carry out conversations, write code, draft emails, and answer any question posed. The chatbot has gained considerable credibility for accurately diagnosing rare conditions in seconds, and it recently passed the U.S. Medical Licensing Examination.
It’s unclear how many doctors are signing onto the ChatGPT website daily, but physicians are actively using the chatbot, particularly for assistance with prior authorization letters and to support decision-making processes in their practices, said Mr. LeTang.
When physicians ask ChatGPT a question, however, they should be mindful that ChatGPT could “hallucinate,” a term that refers to a generated response that sounds plausible but is factually incorrect or is unrelated to the context, explains Harvey Castro, MD, an emergency physician, ChatGPT health care expert, and author of the 2023 book “ChatGPT and Healthcare: Unlocking the Potential of Patient Empowerment.”
Acting on ChatGPT’s response without vetting the information places doctors at serious risk of a lawsuit, he said.
“Sometimes, the response is half true and half false,” he said. “Say, I go outside my specialty of emergency medicine and ask it about a pediatric surgical procedure. It could give me a response that sounds medically correct, but then I ask a pediatric cardiologist, and he says, ‘We don’t even do this. This doesn’t even exist!’ Physicians really have to make sure they are vetting the information provided.”
In response to ChatGPT’s growing usage by health care professionals, hospitals and practices are quickly implementing guidelines, policies, and restrictions that caution physicians about the accuracy of ChatGPT-generated information, adds Mr. LeTang.
Emerging best practices include avoiding the input of patient health information, personally identifiable information, or any data that could be commercially valuable or considered the intellectual property of a hospital or health system, he said.
“Another crucial guideline is not to rely solely on ChatGPT as a definitive source for clinical decision-making; physicians must exercise their professional judgment,” he said. “If best practices are not adhered to, the associated risks are present today. However, these risks may become more significant as AI technologies continue to evolve and become increasingly integrated into health care.”
The potential for misdiagnosis by AI systems and the risk of unnecessary procedures if physicians do not thoroughly evaluate and validate AI predictions are other dangers.
As an example, Mr. LeTang described a case in which a physician documents in the EHR that a patient has presented to the emergency department with chest pains and other signs of a heart attack, and an AI algorithm predicts that the patient is experiencing an active myocardial infarction. If the physician then sends the patient for stenting or an angioplasty without other concrete evidence or tests to confirm the diagnosis, the doctor could later face a misdiagnosis complaint if the costly procedures were unnecessary.
“That’s one of the risks of using artificial intelligence,” he said. “A large percentage of malpractice claims is failure to diagnose, delayed diagnosis, or inaccurate diagnosis. What falls in the category of failure to diagnose is sending a patient for an unnecessary procedure or having an adverse event or bad outcome because of the failure to diagnose.”
So far, no AI lawsuits have been filed, but they may make an appearance soon, said Sue Boisvert, senior patient safety risk manager at The Doctors Company, a national medical liability insurer.
“There are hundreds of AI programs currently in use in health care,” she said. “At some point, a provider will make a decision that is contrary to what the AI recommended. The AI may be wrong, or the provider may be wrong. Either way, the provider will neglect to document their clinical reasoning, a patient will be harmed, and we will have the first AI claim.”
Upcoming AI legal risks to watch for
Lawsuits that allege biased patient care by physicians on the basis of algorithmic bias may also be forthcoming, analysts warn.
Much has been written about algorithmic bias that compounds and worsens inequities in socioeconomic status, ethnicity, sexual orientation, and gender in health systems. In 2019, a groundbreaking article in Science shed light on commonly used algorithms that are considered racially biased and how health care professionals often use such information to make medical decisions.
No claims involving AI bias have come down the pipeline yet, but it’s an area to watch, said Ms. Boisvert. She noted a website that highlights complaints and accusations of AI bias, including in health care.
“We need to be sure the training of the AI is appropriate, current, and broad enough so that there is no bias in the AI when it’s participating in the decision-making,” said Ms. Boisvert. “Imagine if the AI is diagnosing based on a dataset that is not local. It doesn’t represent the population at that particular hospital, and it’s providing inaccurate information to the physicians who are then making decisions about treatment.”
In pain management, for example, there are known differences in how patients experience pain, Ms. Boisvert said. If AI was being used to develop an algorithm for how a particular patient’s postoperative pain should be managed, and the algorithm did not include the differences, the pain control for a certain patient could be inappropriate. A poor outcome resulting from the treatment could lead to a claim against the physician or hospital that used the biased AI system, she said.
In the future, as AI becomes more integrated and accepted in medicine, there may be a risk of legal complaints against doctors for not using AI, said Saurabh Jha, MD, an associate professor of radiology at the University of Pennsylvania, Philadelphia, and a scholar of AI in radiology.
“Ultimately, we might get to a place where AI starts helping physicians detect more or reduce the miss of certain conditions, and it becomes the standard of care,” Dr. Jha said. “For example, if it became part of the standard of care for pulmonary embolism [PE] detection, and you didn’t use it for PE detection, and there was a miss. That could put you at legal risk. We’re not at that stage yet, but that is one future possibility.”
Dr. Parikh envisions an even cloudier liability landscape as the potential grows for AI to control patient care decisions. In such a scenario, rather than just issuing an alert or prediction to a physician, the AI system could trigger an action.
For instance, an algorithm trained to predict sepsis could, once triggered, initiate a nurse-led rapid response or a change in patient care outside the clinician’s control, said Dr. Parikh, who coauthored a recent article on AI and medical liability in The Milbank Quarterly.
“That’s still very much the minority of how AI is being used, but as evidence is growing that AI-based diagnostic tools perform equivalent or even superior to physicians, these autonomous workflows are being considered,” Dr. Parikh said. “When the ultimate action upon the patient is more determined by the AI than what the clinician does, then I think the liability picture gets murkier, and we should be thinking about how we can respond to that from a liability framework.”
How you can prevent AI-related lawsuits
The first step to preventing an AI-related claim is being aware of when and how you are using AI.
Ensure you’re informed about how the AI was trained, Ms. Boisvert stresses.
“Ask questions!” she said. “Is the AI safe? Are the recommendations accurate? Does the AI perform better than current systems? In what way? What databases were used, and did the programmers consider bias? Do I understand how to use the results?”
Never blindly trust the AI but rather view it as a data point in a medical decision, said Dr. Parikh. Ensure that other sources of medical information are properly accessed and that best practices for your specialty are still being followed.
When using any form of AI, document your usage, adds Mr. Rashbaum. A record that clearly outlines how the physician incorporated the AI is critical if a claim later arises in which the doctor is accused of AI-related malpractice, he said.
“Indicating how the AI tool was used, why it was used, and that it was used in conjunction with available clinical information and the clinician’s best judgment could reduce the risk of being found responsible as a result of AI use in a particular case,” he said.
Use chatbots, such as ChatGPT, the way they were intended, as support tools, rather than definitive diagnostic instruments, adds Dr. Castro.
“Doctors should also be well-trained in interpreting and understanding the suggestions provided by ChatGPT and should use their clinical judgment and experience alongside the AI tool for more accurate decision-making,” he said.
In addition, because no AI insurance product exists on the market, physicians and organizations using AI – particularly for direct health care – should evaluate their current insurance or insurance-like products to determine where a claim involving AI might fall and whether the policy would respond, said Ms. Boisvert. The AI vendor/manufacturer will likely have indemnified themselves in the purchase and sale agreement or contract, she said.
It will also become increasingly important for medical practices, hospitals, and health systems to put in place strong data governance strategies, Mr. LeTang said.
“AI relies on good data,” he said. “A data governance strategy is a key component to making sure we understand where the data is coming from, what is represents, how accurate it is, if it’s reproducible, what controls are in place to ensure the right people have the right access, and that if we’re starting to use it to build algorithms, that it’s deidentified.”
While no malpractice claims associated with the use of AI have yet surfaced, this may change as legal courts catch up on the backlog of malpractice claims that were delayed because of COVID-19, and even more so as AI becomes more prevalent in health care, Mr. LeTang said.
“Similar to the attention that autonomous driving systems, like Tesla, receive when the system fails and accidents occur, we can be assured that media outlets will widely publicize AI-related medical adverse events,” he said. “It is crucial for health care professionals, AI developers, and regulatory authorities to work together to ensure the responsible use of AI in health care, with patient safety as the top priority. By doing so, they can mitigate the risks associated with AI implementation and minimize the potential for legal disputes arising from AI-related medical errors.”
A version of this article first appeared on Medscape.com.
The ways in which artificial intelligence (AI) may transform the future of medicine is making headlines across the globe. But chances are, you’re already using AI in your practice every day – you may just not realize it.
And whether you recognize the presence of AI or not, the technology could be putting you in danger of a lawsuit, legal experts say.
“For physicians, AI has also not yet drastically changed or improved the way care is provided or consumed,” said Michael LeTang, chief nursing informatics officer and vice president of risk management and patient safety at Healthcare Risk Advisors, part of TDC Group. “Consequently, it may seem like AI is not present in their work streams, but in reality, it has been utilized in health care for several years. As AI technologies continue to develop and become more sophisticated, we can expect them to play an increasingly significant role in health care.”
Today, most AI applications in health care use narrow AI, which is designed to complete a single task without human assistance, as opposed to artificial general intelligence (AGI), which pertains to human-level reasoning and problem solving across a broad spectrum. Here are some ways doctors are using AI throughout the day – sometimes being aware of its assistance, and sometimes being unaware:
- Many doctors use electronic health records (EHRs) with integrated AI that include computerized clinical decision support tools designed to reduce the risk of diagnostic error and to integrate decision-making in the medication ordering function.
- Cardiologists, pathologists, and dermatologists use AI in the interpretation of vast amounts of images, tracings, and complex patterns.
- Surgeons are using AI-enhanced surgical robotics for orthopedic surgeries, such as joint replacement and spine surgery.
- A growing number of doctors are using ChatGPT to assist in drafting prior authorization letters for insurers. Experts say more doctors are also experimenting with ChatGPT to support medical decision-making.
- Within oncology, physicians use machine learning techniques in the form of computer-aided detection systems for early breast cancer detection.
- AI algorithms are often used by health systems for workflow, staffing optimization, population management, and care coordination.
- Some systems within EHRs use AI to indicate high-risk patients.
- Physicians are using AI applications for the early recognition of sepsis, including EHR-integrated decision tools, such as the Hospital Corporation of America Healthcare’s Sepsis Prediction and Optimization of Therapy and the Sepsis Early Risk Assessment algorithm.
- About 30% of radiologists use AI in their practice to analyze x-rays and CT scans.
- Epic Systems recently announced a partnership with Microsoft to integrate ChatGPT into MyChart, Epic’s patient portal system. Pilot hospitals will utilize ChatGPT to automatically generate responses to patient-generated questions sent via the portal.
The growth of AI in health care has been enormous, and it’s only going to continue, said Ravi B. Parikh, MD, an assistant professor in the department of medical ethics and health policy and medicine at the University of Pennsylvania, Philadelphia.
“What’s really critical is that physicians, clinicians, and nurses using AI are provided with the tools to understand how artificial intelligence works and, most importantly, understand that they are still accountable for making the ultimate decision,” Mr. LeTang said. “The information is not always going to be the right thing to do or the most accurate thing to do. They’re still liable for making a bad decision, even if AI is driving that.”
What are the top AI legal dangers of today?
A pressing legal risk is becoming too reliant on the suggestions that AI-based systems provide, which can lead to poor care decisions, said Kenneth Rashbaum, a New York–based cybersecurity attorney with more than 25 years of experience in medical malpractice defense.
This can occur, for example, when using clinical support systems that leverage AI, machine learning, or statistical pattern recognition. Today, clinical support systems are commonly administered through EHRs and other computerized clinical workflows. In general, such systems match a patient’s characteristics to a computerized clinical knowledge base. An assessment or recommendation is then presented to the physician for a decision.
“If the clinician blindly accepts it without considering whether it’s appropriate for this patient at this time with this presentation, the clinician may bear some responsibility if there is an untoward result,” Mr. Rashbaum said.
“A common claim, even in the days before the EMR [electronic medical record] and AI, was that the clinician did not take all available information into account in rendering treatment, including history of past and present condition, as reflected in the records, communication with past and other present treating clinicians, lab and radiology results, discussions with the patient, and physical examination findings,” he said. “So, if the clinician relied upon the support prompt to the exclusion of these other sources of information, that could be a very strong argument for the plaintiff.”
Chatbots, such as OpenAI’s ChatGPT, are another form of AI raising legal red flags. ChatGPT, trained on a massive set of text data, can carry out conversations, write code, draft emails, and answer any question posed. The chatbot has gained considerable credibility for accurately diagnosing rare conditions in seconds, and it recently passed the U.S. Medical Licensing Examination.
It’s unclear how many doctors are signing onto the ChatGPT website daily, but physicians are actively using the chatbot, particularly for assistance with prior authorization letters and to support decision-making processes in their practices, said Mr. LeTang.
When physicians ask ChatGPT a question, however, they should be mindful that ChatGPT could “hallucinate,” a term that refers to a generated response that sounds plausible but is factually incorrect or is unrelated to the context, explains Harvey Castro, MD, an emergency physician, ChatGPT health care expert, and author of the 2023 book “ChatGPT and Healthcare: Unlocking the Potential of Patient Empowerment.”
Acting on ChatGPT’s response without vetting the information places doctors at serious risk of a lawsuit, he said.
“Sometimes, the response is half true and half false,” he said. “Say, I go outside my specialty of emergency medicine and ask it about a pediatric surgical procedure. It could give me a response that sounds medically correct, but then I ask a pediatric cardiologist, and he says, ‘We don’t even do this. This doesn’t even exist!’ Physicians really have to make sure they are vetting the information provided.”
In response to ChatGPT’s growing usage by health care professionals, hospitals and practices are quickly implementing guidelines, policies, and restrictions that caution physicians about the accuracy of ChatGPT-generated information, adds Mr. LeTang.
Emerging best practices include avoiding the input of patient health information, personally identifiable information, or any data that could be commercially valuable or considered the intellectual property of a hospital or health system, he said.
“Another crucial guideline is not to rely solely on ChatGPT as a definitive source for clinical decision-making; physicians must exercise their professional judgment,” he said. “If best practices are not adhered to, the associated risks are present today. However, these risks may become more significant as AI technologies continue to evolve and become increasingly integrated into health care.”
Other dangers include the potential for misdiagnosis by AI systems and the risk of unnecessary procedures when physicians do not thoroughly evaluate and validate AI predictions.
As an example, Mr. LeTang described a case in which a physician documents in the EHR that a patient has presented to the emergency department with chest pains and other signs of a heart attack, and an AI algorithm predicts that the patient is experiencing an active myocardial infarction. If the physician then sends the patient for stenting or an angioplasty without other concrete evidence or tests to confirm the diagnosis, the doctor could later face a misdiagnosis complaint if the costly procedures were unnecessary.
“That’s one of the risks of using artificial intelligence,” he said. “A large percentage of malpractice claims is failure to diagnose, delayed diagnosis, or inaccurate diagnosis. What falls in the category of failure to diagnose is sending a patient for an unnecessary procedure or having an adverse event or bad outcome because of the failure to diagnose.”
So far, no AI lawsuits have been filed, but they may make an appearance soon, said Sue Boisvert, senior patient safety risk manager at The Doctors Company, a national medical liability insurer.
“There are hundreds of AI programs currently in use in health care,” she said. “At some point, a provider will make a decision that is contrary to what the AI recommended. The AI may be wrong, or the provider may be wrong. Either way, the provider will neglect to document their clinical reasoning, a patient will be harmed, and we will have the first AI claim.”
Upcoming AI legal risks to watch for
Lawsuits alleging that physicians delivered biased patient care on the basis of algorithmic bias may also be forthcoming, analysts warn.
Much has been written about algorithmic bias that compounds and worsens inequities in socioeconomic status, ethnicity, sexual orientation, and gender in health systems. In 2019, a groundbreaking article in Science shed light on commonly used algorithms that are considered racially biased and how health care professionals often use such information to make medical decisions.
No claims involving AI bias have come down the pipeline yet, but it’s an area to watch, said Ms. Boisvert. She noted a website that highlights complaints and accusations of AI bias, including in health care.
“We need to be sure the training of the AI is appropriate, current, and broad enough so that there is no bias in the AI when it’s participating in the decision-making,” said Ms. Boisvert. “Imagine if the AI is diagnosing based on a dataset that is not local. It doesn’t represent the population at that particular hospital, and it’s providing inaccurate information to the physicians who are then making decisions about treatment.”
In pain management, for example, there are known differences in how patients experience pain, Ms. Boisvert said. If AI were used to develop an algorithm for managing a particular patient’s postoperative pain, and the algorithm did not account for those differences, pain control for that patient could be inappropriate. A poor outcome resulting from the treatment could lead to a claim against the physician or hospital that used the biased AI system, she said.
In the future, as AI becomes more integrated and accepted in medicine, there may be a risk of legal complaints against doctors for not using AI, said Saurabh Jha, MD, an associate professor of radiology at the University of Pennsylvania, Philadelphia, and a scholar of AI in radiology.
“Ultimately, we might get to a place where AI starts helping physicians detect more or reduce the miss of certain conditions, and it becomes the standard of care,” Dr. Jha said. “For example, if it became part of the standard of care for pulmonary embolism [PE] detection, and you didn’t use it for PE detection, and there was a miss. That could put you at legal risk. We’re not at that stage yet, but that is one future possibility.”
Dr. Parikh envisions an even cloudier liability landscape as the potential grows for AI to control patient care decisions. In such a scenario, rather than just issuing an alert or prediction to a physician, the AI system could trigger an action.
For instance, an algorithm trained to predict sepsis could, once triggered, initiate a nurse-led rapid response or a change in patient care outside the clinician’s control, said Dr. Parikh, who coauthored a recent article on AI and medical liability in The Milbank Quarterly.
“That’s still very much the minority of how AI is being used, but as evidence is growing that AI-based diagnostic tools perform equivalent or even superior to physicians, these autonomous workflows are being considered,” Dr. Parikh said. “When the ultimate action upon the patient is more determined by the AI than what the clinician does, then I think the liability picture gets murkier, and we should be thinking about how we can respond to that from a liability framework.”
How you can prevent AI-related lawsuits
The first step to preventing an AI-related claim is being aware of when and how you are using AI.
Ensure you’re informed about how the AI was trained, Ms. Boisvert stresses.
“Ask questions!” she said. “Is the AI safe? Are the recommendations accurate? Does the AI perform better than current systems? In what way? What databases were used, and did the programmers consider bias? Do I understand how to use the results?”
Never blindly trust the AI but rather view it as a data point in a medical decision, said Dr. Parikh. Ensure that other sources of medical information are properly accessed and that best practices for your specialty are still being followed.
When using any form of AI, document your usage, adds Mr. Rashbaum. A record that clearly outlines how the physician incorporated the AI is critical if a claim later arises in which the doctor is accused of AI-related malpractice, he said.
“Indicating how the AI tool was used, why it was used, and that it was used in conjunction with available clinical information and the clinician’s best judgment could reduce the risk of being found responsible as a result of AI use in a particular case,” he said.
Use chatbots, such as ChatGPT, the way they were intended: as support tools rather than definitive diagnostic instruments, adds Dr. Castro.
“Doctors should also be well-trained in interpreting and understanding the suggestions provided by ChatGPT and should use their clinical judgment and experience alongside the AI tool for more accurate decision-making,” he said.
In addition, because no AI insurance product exists on the market, physicians and organizations using AI – particularly for direct health care – should evaluate their current insurance or insurance-like products to determine where a claim involving AI might fall and whether the policy would respond, said Ms. Boisvert. The AI vendor/manufacturer will likely have indemnified themselves in the purchase and sale agreement or contract, she said.
It will also become increasingly important for medical practices, hospitals, and health systems to put in place strong data governance strategies, Mr. LeTang said.
“AI relies on good data,” he said. “A data governance strategy is a key component to making sure we understand where the data is coming from, what it represents, how accurate it is, if it’s reproducible, what controls are in place to ensure the right people have the right access, and that if we’re starting to use it to build algorithms, that it’s deidentified.”
While no malpractice claims associated with the use of AI have yet surfaced, this may change as the courts catch up on the backlog of malpractice claims delayed because of COVID-19, and even more so as AI becomes more prevalent in health care, Mr. LeTang said.
“Similar to the attention that autonomous driving systems, like Tesla, receive when the system fails and accidents occur, we can be assured that media outlets will widely publicize AI-related medical adverse events,” he said. “It is crucial for health care professionals, AI developers, and regulatory authorities to work together to ensure the responsible use of AI in health care, with patient safety as the top priority. By doing so, they can mitigate the risks associated with AI implementation and minimize the potential for legal disputes arising from AI-related medical errors.”
A version of this article first appeared on Medscape.com.
The ways in which artificial intelligence (AI) may transform the future of medicine is making headlines across the globe. But chances are, you’re already using AI in your practice every day – you may just not realize it.
And whether you recognize the presence of AI or not, the technology could be putting you in danger of a lawsuit, legal experts say.
“For physicians, AI has also not yet drastically changed or improved the way care is provided or consumed,” said Michael LeTang, chief nursing informatics officer and vice president of risk management and patient safety at Healthcare Risk Advisors, part of TDC Group. “Consequently, it may seem like AI is not present in their work streams, but in reality, it has been utilized in health care for several years. As AI technologies continue to develop and become more sophisticated, we can expect them to play an increasingly significant role in health care.”
Today, most AI applications in health care use narrow AI, which is designed to complete a single task without human assistance, as opposed to artificial general intelligence (AGI), which pertains to human-level reasoning and problem solving across a broad spectrum. Here are some ways doctors are using AI throughout the day – sometimes being aware of its assistance, and sometimes being unaware:
- Many doctors use electronic health records (EHRs) with integrated AI that include computerized clinical decision support tools designed to reduce the risk of diagnostic error and to integrate decision-making in the medication ordering function.
- Cardiologists, pathologists, and dermatologists use AI in the interpretation of vast amounts of images, tracings, and complex patterns.
- Surgeons are using AI-enhanced surgical robotics for orthopedic surgeries, such as joint replacement and spine surgery.
- A growing number of doctors are using ChatGPT to assist in drafting prior authorization letters for insurers. Experts say more doctors are also experimenting with ChatGPT to support medical decision-making.
- Within oncology, physicians use machine learning techniques in the form of computer-aided detection systems for early breast cancer detection.
- AI algorithms are often used by health systems for workflow, staffing optimization, population management, and care coordination.
- Some systems within EHRs use AI to indicate high-risk patients.
- Physicians are using AI applications for the early recognition of sepsis, including EHR-integrated decision tools, such as the Hospital Corporation of America Healthcare’s Sepsis Prediction and Optimization Therapy and the Sepsis Early Risk Assessment algorithm.
- About 30% of radiologists use AI in their practice to analyze x-rays and CT scans.
- Epic Systems recently announced a partnership with Microsoft to integrate ChatGPT into MyChart, Epic’s patient portal system. Pilot hospitals will utilize ChatGPT to automatically generate responses to patient-generated questions sent via the portal.
The growth of AI in health care has been enormous, and it’s only going to continue, said Ravi B. Parikh, MD, an assistant professor in the department of medical ethics and health policy and medicine at the University of Pennsylvania, Philadelphia.
“What’s really critical is that physicians, clinicians, and nurses using AI are provided with the tools to understand how artificial intelligence works and, most importantly, understand that they are still accountable for making the ultimate decision,” Mr. LeTang said, “The information is not always going to be the right thing to do or the most accurate thing to do. They’re still liable for making a bad decision, even if AI is driving that.”
What are the top AI legal dangers of today?
A pressing legal risk is becoming too reliant on the suggestions that AI-based systems provide, which can lead to poor care decisions, said Kenneth Rashbaum, a New York–based cybersecurity attorney with more than 25 years of experience in medical malpractice defense.
This can occur, for example, when using clinical support systems that leverage AI, machine learning, or statistical pattern recognition. Today, clinical support systems are commonly administered through EHRs and other computerized clinical workflows. In general, such systems match a patient’s characteristics to a computerized clinical knowledge base. An assessment or recommendation is then presented to the physician for a decision.
“If the clinician blindly accepts it without considering whether it’s appropriate for this patient at this time with this presentation, the clinician may bear some responsibility if there is an untoward result,” Mr. Rashbaum said.
“A common claim even in the days before the EMR [electronic medical record] and AI, was that the clinician did not take all available information into account in rendering treatment, including history of past and present condition, as reflected in the records, communication with past and other present treating clinicians, lab and radiology results, discussions with the patient, and physical examination findings,” he said. “So, if the clinician relied upon the support prompt to the exclusion of these other sources of information, that could be a very strong argument for the plaintiff.”
Chatbots, such OpenAI’s ChatGPT, are another form of AI raising legal red flags. ChatGPT, trained on a massive set of text data, can carry out conversations, write code, draft emails, and answer any question posed. The chatbot has gained considerable credibility for accurately diagnosing rare conditions in seconds, and it recently passed the U.S. Medical Licensing Examination.
It’s unclear how many doctors are signing onto the ChatGPT website daily, but physicians are actively using the chatbot, particularly for assistance with prior authorization letters and to support decision-making processes in their practices, said Mr. LeTang.
When physicians ask ChatGPT a question, however, they should be mindful that ChatGPT could “hallucinate,” a term that refers to a generated response that sounds plausible but is factually incorrect or is unrelated to the context, explains Harvey Castro, MD, an emergency physician, ChatGPT health care expert, and author of the 2023 book “ChatGPT and Healthcare: Unlocking the Potential of Patient Empowerment.”
Acting on ChatGPT’s response without vetting the information places doctors at serious risk of a lawsuit, he said.
“Sometimes, the response is half true and half false,” he said. “Say, I go outside my specialty of emergency medicine and ask it about a pediatric surgical procedure. It could give me a response that sounds medically correct, but then I ask a pediatric cardiologist, and he says, ‘We don’t even do this. This doesn’t even exist!’ Physicians really have to make sure they are vetting the information provided.”
In response to ChatGPT’s growing usage by health care professionals, hospitals and practices are quickly implementing guidelines, policies, and restrictions that caution physicians about the accuracy of ChatGPT-generated information, adds Mr. LeTang.
Emerging best practices include avoiding the input of patient health information, personally identifiable information, or any data that could be commercially valuable or considered the intellectual property of a hospital or health system, he said.
“Another crucial guideline is not to rely solely on ChatGPT as a definitive source for clinical decision-making; physicians must exercise their professional judgment,” he said. “If best practices are not adhered to, the associated risks are present today. However, these risks may become more significant as AI technologies continue to evolve and become increasingly integrated into health care.”
The potential for misdiagnosis by AI systems and the risk of unnecessary procedures if physicians do not thoroughly evaluate and validate AI predictions are other dangers.
As an example, Mr. LeTang described a case in which a physician documents in the EHR that a patient has presented to the emergency department with chest pains and other signs of a heart attack, and an AI algorithm predicts that the patient is experiencing an active myocardial infarction. If the physician then sends the patient for stenting or an angioplasty without other concrete evidence or tests to confirm the diagnosis, the doctor could later face a misdiagnosis complaint if the costly procedures were unnecessary.
“That’s one of the risks of using artificial intelligence,” he said. “A large percentage of malpractice claims is failure to diagnose, delayed diagnosis, or inaccurate diagnosis. What falls in the category of failure to diagnose is sending a patient for an unnecessary procedure or having an adverse event or bad outcome because of the failure to diagnose.”
So far, no AI lawsuits have been filed, but they may make an appearance soon, said Sue Boisvert, senior patient safety risk manager at The Doctors Company, a national medical liability insurer.
“There are hundreds of AI programs currently in use in health care,” she said. “At some point, a provider will make a decision that is contrary to what the AI recommended. The AI may be wrong, or the provider may be wrong. Either way, the provider will neglect to document their clinical reasoning, a patient will be harmed, and we will have the first AI claim.”
Upcoming AI legal risks to watch for
Lawsuits that allege biased patient care by physicians on the basis of algorithmic bias may also be forthcoming, analysts warn.
Much has been written about algorithmic bias that compounds and worsens inequities in socioeconomic status, ethnicity, sexual orientation, and gender in health systems. In 2019, a groundbreaking article in Science shed light on commonly used algorithms that are considered racially biased and how health care professionals often use such information to make medical decisions.
No claims involving AI bias have come down the pipeline yet, but it’s an area to watch, said Ms. Boisvert. She noted a website that highlights complaints and accusations of AI bias, including in health care.
“We need to be sure the training of the AI is appropriate, current, and broad enough so that there is no bias in the AI when it’s participating in the decision-making,” said Ms. Boisvert. “Imagine if the AI is diagnosing based on a dataset that is not local. It doesn’t represent the population at that particular hospital, and it’s providing inaccurate information to the physicians who are then making decisions about treatment.”
In pain management, for example, there are known differences in how patients experience pain, Ms. Boisvert said. If AI was being used to develop an algorithm for how a particular patient’s postoperative pain should be managed, and the algorithm did not include the differences, the pain control for a certain patient could be inappropriate. A poor outcome resulting from the treatment could lead to a claim against the physician or hospital that used the biased AI system, she said.
In the future, as AI becomes more integrated and accepted in medicine, there may be a risk of legal complaints against doctors for not using AI, said Saurabh Jha, MD, an associate professor of radiology at the University of Pennsylvania, Philadelphia, and a scholar of AI in radiology.
“Ultimately, we might get to a place where AI starts helping physicians detect more or reduce the miss of certain conditions, and it becomes the standard of care,” Dr. Jha said. “For example, if it became part of the standard of care for pulmonary embolism [PE] detection, and you didn’t use it for PE detection, and there was a miss. That could put you at legal risk. We’re not at that stage yet, but that is one future possibility.”
Dr. Parikh envisions an even cloudier liability landscape as the potential grows for AI to control patient care decisions. In such a scenario, rather than just issuing an alert or prediction to a physician, the AI system could trigger an action.
For instance, if an algorithm is trained to predict sepsis and, once triggered, the AI could initiate a nurse-led rapid response or a change in patient care outside the clinician’s control, said Dr. Parikh, who coauthored a recent article on AI and medical liability in The Milbank Quarterly.
“That’s still very much the minority of how AI is being used, but as evidence is growing that AI-based diagnostic tools perform equivalent or even superior to physicians, these autonomous workflows are being considered,” Dr. Parikh said. “When the ultimate action upon the patient is more determined by the AI than what the clinician does, then I think the liability picture gets murkier, and we should be thinking about how we can respond to that from a liability framework.”
How you can prevent AI-related lawsuits
The first step to preventing an AI-related claim is being aware of when and how you are using AI.
Ensure you’re informed about how the AI was trained, Ms. Boisvert stressed.
“Ask questions!” she said. “Is the AI safe? Are the recommendations accurate? Does the AI perform better than current systems? In what way? What databases were used, and did the programmers consider bias? Do I understand how to use the results?”
Never blindly trust the AI but rather view it as a data point in a medical decision, said Dr. Parikh. Ensure that other sources of medical information are properly accessed and that best practices for your specialty are still being followed.
When using any form of AI, document your usage, Mr. Rashbaum added. A record that clearly outlines how the physician incorporated the AI is critical if a claim later arises in which the doctor is accused of AI-related malpractice, he said.
“Indicating how the AI tool was used, why it was used, and that it was used in conjunction with available clinical information and the clinician’s best judgment could reduce the risk of being found responsible as a result of AI use in a particular case,” he said.
Use chatbots, such as ChatGPT, the way they were intended: as support tools rather than definitive diagnostic instruments, Dr. Castro added.
“Doctors should also be well-trained in interpreting and understanding the suggestions provided by ChatGPT and should use their clinical judgment and experience alongside the AI tool for more accurate decision-making,” he said.
In addition, because no AI insurance product exists on the market, physicians and organizations using AI – particularly for direct health care – should evaluate their current insurance or insurance-like products to determine where a claim involving AI might fall and whether the policy would respond, said Ms. Boisvert. The AI vendor/manufacturer will likely have indemnified themselves in the purchase and sale agreement or contract, she said.
It will also become increasingly important for medical practices, hospitals, and health systems to put in place strong data governance strategies, Mr. LeTang said.
“AI relies on good data,” he said. “A data governance strategy is a key component to making sure we understand where the data is coming from, what it represents, how accurate it is, if it’s reproducible, what controls are in place to ensure the right people have the right access, and that if we’re starting to use it to build algorithms, that it’s deidentified.”
While no malpractice claims associated with the use of AI have yet surfaced, this may change as courts catch up on the backlog of malpractice claims delayed by COVID-19, and even more so as AI becomes more prevalent in health care, Mr. LeTang said.
“Similar to the attention that autonomous driving systems, like Tesla, receive when the system fails and accidents occur, we can be assured that media outlets will widely publicize AI-related medical adverse events,” he said. “It is crucial for health care professionals, AI developers, and regulatory authorities to work together to ensure the responsible use of AI in health care, with patient safety as the top priority. By doing so, they can mitigate the risks associated with AI implementation and minimize the potential for legal disputes arising from AI-related medical errors.”
A version of this article first appeared on Medscape.com.
The enemy of carcinogenic fumes is my friendly begonia
Sowing the seeds of cancer prevention
Are you looking to add to your quality of life, even though pets are not your speed? Might we suggest something with lower maintenance? Something a little greener?
Indoor plants can purify the air that comes from outside. Researchers at the University of Technology Sydney, in partnership with the plantscaping company Ambius, showed that a “green wall” made up of mixed indoor plants was able to suck up 97% of “the most toxic compounds” from the air in just 8 hours. We’re talking about lung-irritating, headache-inducing, cancer risk–boosting compounds from gasoline fumes, including benzene.
Public health initiatives often strive to reduce cardiovascular and obesity risks, but breathing seems pretty important too. According to the World Health Organization, household air pollution is responsible for about 2.5 million global premature deaths each year. And since 2020 we’ve become accustomed to spending more time inside and at home.
“This new research proves that plants should not just be seen as ‘nice to have,’ but rather a crucial part of every workplace wellness plan,” Ambius General Manager Johan Hodgson said in a statement released by the university.
So don’t spend hundreds of dollars on a fancy air filtration system when a wall of plants can do that for next to nothing. Find what works for you and your space and become a plant parent today! Your lungs will thank you.
But officer, I had to swerve to miss the duodenal ampulla
Tiny video capsule endoscopes have been around for many years, but they have one big weakness: The ingestible cameras’ journey through the GI tract is passively driven by gravity and the natural movement of the body, so they often miss potential problem areas.
Not anymore. That flaw has been addressed by medical technology company AnX Robotica, which has taken endoscopy to the next level by adding that wondrous directional control device of the modern electronic age, a joystick.
The new system “uses an external magnet and hand-held video game style joysticks to move the capsule in three dimensions,” which allows physicians to “remotely drive a miniature video capsule to all regions of the stomach to visualize and photograph potential problem areas,” according to Andrew C. Meltzer, MD, of George Washington University and associates, who conducted a pilot study funded by AnX Robotica.
The video capsule provided a 95% rate of visualization in the stomachs of 40 patients who were examined at a medical office building by an emergency medicine physician who had no previous specialty training in endoscopy. “Capsules were driven by the ER physician and then the study reports were reviewed by an attending gastroenterologist who was physically off site,” the investigators said in a written statement.
The capsule operator did receive some additional training, and development of artificial intelligence to self-drive the capsule is in the works, but for now, we’re talking about a device controlled by a human using a joystick. And we all know that 50-year-olds are not especially known for their joystick skills. For that we need real experts. Yup, we need to put those joystick-controlled capsule endoscopes in the hands of teenage gamers. Who wants to go first?
Maybe AI isn’t ready for the big time after all
“How long before some intrepid stockholder says: ‘Hey, instead of paying doctors, why don’t we just use the free robot instead?’ ” Those words appeared on LOTME but a month ago. After all, the AI is supposed to be smarter and more empathetic than a doctor. And did we mention it’s free? Or at least extremely cheap. Cheaper than, say, a group of recently unionized health care workers.
In early May, the paid employees manning the National Eating Disorders Association emergency hotline voted to unionize, as they felt overwhelmed and underpaid. Apparently, paying six people an extra few thousand a year was too much for NEDA’s leadership, as they decided a few weeks later to fire those workers, fully closing down the hotline. Instead of talking to a real person, people “calling in” for support would be met with Tessa, a wellness chatbot that would hopefully guide them through their crisis. Key word, hopefully.
In perhaps the least surprising twist of the year, NEDA was forced to walk back its decision about a week after its initial announcement. It all started with a viral Instagram post from a woman who called in and received the following advice from Tessa: Lose 1-2 pounds a week, count calories and work for a 500- to 1,000-calorie deficit, weigh herself weekly, and restrict her diet. Unfortunately, all of these suggestions were things that led to the development of the woman’s eating disorder.
Naturally, NEDA responded in good grace, accusing the woman of lying. A NEDA vice president even left some nasty comments on the post, but hastily deleted them a day later when NEDA announced it was shutting down Tessa “until further notice for a complete investigation.” NEDA’s CEO insisted they hadn’t seen that behavior from Tessa before, calling it a “bug” and insisting the bot would only be down temporarily until the triggers causing the bug were fixed.
In the aftermath, several doctors and psychologists chimed in, terming the rush to automate human roles dangerous and risky. After all, much of what makes these hotlines effective is the volunteers speaking from their own experience. An unsupervised bot doesn’t seem to have what it takes to deal with a mental health crisis, but we’re betting that Tessa will be back. As a wise cephalopod once said: Nobody gives a care about the fate of labor as long as they can get their instant gratification.
You can’t spell existential without s-t-e-n-t
This week, we’re including a special “bonus” item that, to be honest, has nothing to do with stents. That’s why our editor is making us call this a “bonus” (and making us use quote marks, too): It doesn’t really have anything to do with stents or health care or those who practice health care. Actually, his exact words were, “You can’t just give the readers someone else’s ****ing list and expect to get paid for it.” Did we mention that he looks like Jack Nicklaus but acts like BoJack Horseman?
Anywaaay, we’re pretty sure that the list in question – “America’s Top 10 Most Googled Existential Questions” – says something about the human condition, just not about stents:
1. Why is the sky blue?
2. What do dreams mean?
3. What is the meaning of life?
4. Why am I so tired?
5. Who am I?
6. What is love?
7. Is a hot dog a sandwich?
8. What came first, the chicken or the egg?
9. What should I do?
10. Do animals have souls?
Menopause and long COVID: What women should know
British researchers have noted that women at midlife who have long COVID seem to get specific, and severe, symptoms, including brain fog, fatigue, new-onset dizziness, and difficulty sleeping through the night.
Doctors also think it’s possible that long COVID worsens the symptoms of perimenopause and menopause. Lower levels of estrogen and testosterone appear to be the reason.
“A long COVID theory is that there is a temporary disruption to physiological ovarian steroid hormone production, which could [worsen] symptoms of perimenopause and menopause,” said JoAnn V. Pinkerton, MD, professor of obstetrics at the University of Virginia, Charlottesville, and executive director of the North American Menopause Society.
Long COVID symptoms and menopause symptoms can also be very hard to tell apart.
Another U.K. study cautions that because of this kind of symptom overlap, women at midlife may be misdiagnosed. Research from the North American Menopause Society shows that many women may have trouble recovering from long COVID unless their hormone deficiency is treated.
What are the symptoms of long COVID?
There are over 200 symptoms that have been associated with long COVID, according to the American Medical Association. Common symptoms include extreme fatigue, feeling depleted after exertion, cognitive issues such as brain fog, a heart rate over 100 beats per minute, and loss of the sense of smell and taste.
Long COVID symptoms begin a few weeks to a few months after a COVID infection. They can last an indefinite amount of time, but “the hope is that long COVID will not be lifelong,” said Clare Flannery, MD, an endocrinologist and associate professor in the departments of obstetrics, gynecology and reproductive sciences and internal medicine at Yale University, New Haven, Conn.
What are the symptoms of menopause?
Some symptoms of menopause include vaginal infections, irregular bleeding, urinary problems, and sexual problems.
Women at midlife can also have other symptoms that overlap with those of perimenopause and menopause.
“Common symptoms of perimenopause and menopause which may also be symptoms ascribed to long COVID include hot flashes, night sweats, disrupted sleep, low mood, depression or anxiety, decreased concentration, memory problems, joint and muscle pains, and headaches,” Dr. Pinkerton said.
Can long COVID actually bring on menopause?
In short: Possibly.
A new study from the Massachusetts Institute of Technology/Patient-Led Research Collaborative/University of California, San Francisco, found that long COVID can cause disruptions to a woman’s menstrual cycle, ovaries, fertility, and menopause itself.
Chronic inflammation caused by long COVID could also affect hormones. This kind of inflammatory response could explain irregularities in a woman’s menstrual cycle, according to the Newson Health Research and Education study. For instance, “when the body has inflammation, ovulation can happen,” Dr. Flannery said.
The mechanism for how long COVID could spur menopause can also involve a woman’s ovaries.
“Since the theory is that COVID affects the ovary with declines in ovarian reserve and ovarian function, it makes sense that long COVID could bring on symptoms of perimenopause or menopause more acutely or more severely and lengthen the symptoms of the perimenopause and menopausal transition,” Dr. Pinkerton said.
How can hormone replacement therapy benefit women dealing with long COVID during menopause?
Estradiol, the strongest estrogen hormone in a woman’s body, has already been shown to have a positive effect against COVID.
“Estradiol therapy treats symptoms more aggressively in the setting of long COVID,” said Dr. Flannery.
Estradiol is also a form of hormone therapy for menopause symptoms.
“Estradiol has been shown to help hot flashes, night sweats, and sleep and improve mood during perimenopause,” said Dr. Pinkerton. “So it’s likely that perimenopausal or menopausal women with long COVID would see improvements both due to the action of estradiol on the ovary seen during COVID and the improvements in symptoms.”
Estrogen-based hormone therapy has been linked to an increased risk for endometrial, breast, and ovarian cancer, according to the American Cancer Society. This means you should carefully consider how comfortable you are with those additional risks before starting this kind of therapy.
“Which of your symptoms are the most difficult to manage? You may see if you can navigate one to three of them. What are you willing to do for your symptoms? If a woman is willing to favor her sleep for the next 6 months to a year, she may be willing to change how she perceives her risk for cancer,” Dr. Flannery said. “What risk is a woman willing to take? I think if someone has a very low concern about a risk of cancer, and she’s suffering a disrupted life, then taking estradiol in a 1- to 2-year trial period could be critical to help.”
What else can help ease long COVID during menopause?
Getting the COVID vaccine, as well as getting a booster, could help. Not only will this help prevent people from being reinfected with COVID, which can worsen symptoms, but a new Swedish study says there is no evidence that it will cause postmenopausal problems like irregular bleeding.
“Weak and inconsistent associations were observed between SARS-CoV-2 vaccination and healthcare contacts for bleeding in women who are postmenopausal, and even less evidence was recorded of an association for menstrual disturbance or bleeding in women who were premenopausal,” said study coauthor Rickard Ljung, MD, PhD, MPH, professor and acting head of the pharmacoepidemiology and analysis department in the division of use and information of the Swedish Medical Products Agency in Uppsala.
A version of this article first appeared on WebMD.com.
British researchers have noted that women at midlife who have long COVID seem to get specific, and severe, symptoms, including brain fog, fatigue, new-onset dizziness, and difficulty sleeping through the night.
Doctors also think it’s possible that long COVID worsens the symptoms of perimenopause and menopause. Lower levels of estrogen and testosterone appear to be the reason.
“A long COVID theory is that there is a temporary disruption to physiological ovarian steroid hormone production, which could [worsen] symptoms of perimenopause and menopause,” said JoAnn V. Pinkerton, MD, professor of obstetrics at the University of Virginia, Charlottesville, and executive director of the North American Menopause Society.
Long COVID symptoms and menopause symptoms can also be very hard to tell apart.
Another U.K. study cautions that because of this kind of symptom overlap, women at midlife may be misdiagnosed. Research from the North American Menopause Society shows that many women may have trouble recovering from long COVID unless their hormone deficiency is treated.
What are the symptoms of long COVID?
There are over 200 symptoms that have been associated with long COVID, according to the American Medical Association. Some common symptoms are currently defined as the following: feeling extremely tired, feeling depleted after exertion, cognitive issues such as brain fog, heart beating over 100 times a minute, and a loss of sense of smell and taste.
Long COVID symptoms begin a few weeks to a few months after a COVID infection. They can last an indefinite amount of time, but “the hope is that long COVID will not be lifelong,” said Clare Flannery, MD, an endocrinologist and associate professor in the departments of obstetrics, gynecology and reproductive sciences and internal medicine at Yale University, New Haven, Conn.
What are the symptoms of menopause?
Some symptoms of menopause include vaginal infections, irregular bleeding, urinary problems, and sexual problems.
Women in their middle years have other symptoms that can be the same as perimenopause/menopause symptoms.
“Common symptoms of perimenopause and menopause which may also be symptoms ascribed to long COVID include hot flashes, night sweats, disrupted sleep, low mood, depression or anxiety, decreased concentration, memory problems, joint and muscle pains, and headaches,” Dr. Pinkerton said.
Can long COVID actually bring on menopause?
In short: Possibly.
A new study from the Massachusetts Institute of Technology/Patient-Led Research Collaborative/University of California, San Francisco, found that long COVID can cause disruptions to a woman’s menstrual cycle, ovaries, fertility, and menopause itself.
These disruptions may be driven by the chronic inflammation that long COVID triggers, which can affect hormones as well. This kind of inflammatory response could explain irregularities in a woman’s menstrual cycle, according to the Newson Health Research and Education study. For instance, “when the body has inflammation, ovulation can happen,” Dr. Flannery said.
The mechanism for how long COVID could spur menopause can also involve a woman’s ovaries.
“Since the theory is that COVID affects the ovary with declines in ovarian reserve and ovarian function, it makes sense that long COVID could bring on symptoms of perimenopause or menopause more acutely or more severely and lengthen the symptoms of the perimenopause and menopausal transition,” Dr. Pinkerton said.
How can hormone replacement therapy benefit women dealing with long COVID during menopause?
Estradiol, the most potent estrogen in a woman’s body, has already been shown to have a protective effect against COVID.
“Estradiol therapy treats symptoms more aggressively in the setting of long COVID,” said Dr. Flannery.
Estradiol is also a form of hormone therapy for menopause symptoms.
“Estradiol has been shown to help hot flashes, night sweats, and sleep and improve mood during perimenopause,” said Dr. Pinkerton. “So it’s likely that perimenopausal or menopausal women with long COVID would see improvements both due to the action of estradiol on the ovary seen during COVID and the improvements in symptoms.”
Estrogen-based hormone therapy has been linked to an increased risk for endometrial, breast, and ovarian cancer, according to the American Cancer Society. This means you should carefully consider how comfortable you are with those additional risks before starting this kind of therapy.
“Which of your symptoms are the most difficult to manage? You may see if you can navigate one to three of them. What are you willing to do for your symptoms? If a woman is willing to favor her sleep for the next 6 months to a year, she may be willing to change how she perceives her risk for cancer,” Dr. Flannery said. “What risk is a woman willing to take? I think if someone has a very low concern about a risk of cancer, and she’s suffering a disrupted life, then taking estradiol in a 1- to 2-year trial period could be critical to help.”
What else can help ease long COVID during menopause?
Getting the COVID vaccine, and a booster, could help. Not only does vaccination help prevent reinfection with COVID, which can worsen symptoms, but a new Swedish study found no evidence that it causes postmenopausal problems like irregular bleeding.
“Weak and inconsistent associations were observed between SARS-CoV-2 vaccination and healthcare contacts for bleeding in women who are postmenopausal, and even less evidence was recorded of an association for menstrual disturbance or bleeding in women who were premenopausal,” said study coauthor Rickard Ljung, MD, PhD, MPH, professor and acting head of the pharmacoepidemiology and analysis department in the division of use and information of the Swedish Medical Products Agency in Uppsala.
A version of this article first appeared on WebMD.com.
Game-changing Alzheimer’s research: The latest on biomarkers
The field of neurodegenerative dementias, particularly Alzheimer’s disease (AD), has been revolutionized by the development of imaging and cerebrospinal fluid biomarkers and is on the brink of a new development: emerging plasma biomarkers. Research now recognizes the relationship between the cognitive-behavioral syndromic diagnosis (that is, the illness) and the etiologic diagnosis (the disease) – and the need to consider each separately when developing a diagnostic formulation. The National Institute on Aging and Alzheimer’s Association Research Framework uses the amyloid, tau, and neurodegeneration system to define AD biologically in living patients. Here is an overview of the framework, which requires biomarker evidence of amyloid plaques (amyloid positivity) and neurofibrillary tangles (tau positivity), with evidence of neurodegeneration (neurodegeneration positivity) to support the diagnosis.
The diagnostic approach for symptomatic patients
The differential diagnosis in symptomatic patients with mild cognitive impairment (MCI), mild behavioral impairment, or dementia is broad and includes multiple neurodegenerative diseases (for example, AD, frontotemporal lobar degeneration, dementia with Lewy bodies, argyrophilic grain disease, hippocampal sclerosis); vascular ischemic brain injury (for example, stroke); tumors; infectious, inflammatory, paraneoplastic, or demyelinating diseases; trauma; hydrocephalus; toxic/metabolic insults; and other rare diseases. The patient’s clinical syndrome narrows the differential diagnosis.
Once the clinician has a prioritized differential diagnosis of the brain disease or condition that is probably causing or contributing to the patient’s signs and symptoms, they can then select appropriate assessments and tests, typically starting with a laboratory panel and brain MRI. Strong evidence backed by practice recommendations also supports the use of fluorodeoxyglucose PET as a marker of functional brain abnormalities associated with dementia. Although molecular biomarkers are typically considered at a later stage of the clinical workup, the anticipated future availability of plasma biomarkers will probably change the timing of molecular biomarker assessment in patients with suspected cognitive impairment owing to AD.
Molecular PET biomarkers
Three PET tracers approved by the U.S. Food and Drug Administration for the detection of cerebral amyloid plaques have high sensitivity (89%-98%) and specificity (88%-100%), compared with autopsy, the gold standard diagnostic tool. However, these scans are costly and are not reimbursed by Medicare and Medicaid. Because all amyloid PET scans are covered by the Veterans Administration, this test is more readily accessible for patients receiving VA benefits.
The appropriate-use criteria developed by the Amyloid Imaging Task Force recommend amyloid PET for patients with persistent or progressive MCI or dementia. In such patients, a negative amyloid PET scan would strongly weigh against AD, supporting a differential diagnosis of other etiologies. Although a positive amyloid PET scan in patients with MCI or dementia indicates the presence of amyloid plaques, it does not necessarily confirm AD as the cause. Cerebral amyloid plaques may coexist with other pathologies and increase with age, even in cognitively normal individuals.
The IDEAS study looked at the clinical utility of amyloid PET in a real-world dementia specialist setting. In the study, dementia subspecialists documented their presumed etiologic diagnosis (and level of confidence) before and after amyloid PET. Of the 11,409 patients who completed the study, the etiologic diagnosis changed from AD to non-AD in just over 25% of cases and from non-AD to AD in 10.5%. Clinical management changed in about 60% of patients with MCI and 63.5% of patients with dementia.
In May 2020, the FDA approved flortaucipir F-18, the first diagnostic tau radiotracer for use with PET to estimate the density and distribution of aggregated tau neurofibrillary tangles in adults with cognitive impairment undergoing evaluation for AD. Regulatory approval of flortaucipir F-18 was based on findings from two clinical trials of terminally ill patients who were followed to autopsy. The studies included patients with a spectrum of clinically diagnosed dementias and those with normal cognition. The primary outcome of the studies was accurate visual interpretation of the images in detecting advanced AD tau neurofibrillary tangle pathology (Braak stage V or VI tau pathology). Sensitivity of five trained readers ranged from 68% to 86%, and specificity ranged from 63% to 100%; interrater agreement was 0.87. Tau PET is not yet reimbursed and is therefore not yet readily available in the clinical setting. Moreover, appropriate use criteria have not yet been published.
Molecular fluid biomarkers
Cerebrospinal fluid (CSF) analysis is currently the most readily available and reimbursed test to aid in diagnosing AD, with appropriate-use criteria for patients with suspected AD. CSF biomarkers for AD are useful in cognitively impaired patients when the etiologic diagnosis is equivocal, there is only an intermediate level of diagnostic confidence, or there is very high confidence in the etiologic diagnosis. Testing for CSF biomarkers is also recommended for patients at very early clinical stages (for example, early MCI) or with atypical clinical presentations.
A decreased concentration of amyloid-beta 42 in CSF is a marker of amyloid neuritic plaques in the brain. An increased concentration of total tau in CSF reflects injury to neurons, and an increased concentration of specific isoforms of hyperphosphorylated tau reflects neurofibrillary tangles. Presently, the ratios of t-tau to amyloid-beta 42, amyloid-beta 42 to amyloid-beta 40, and phosphorylated-tau 181 to amyloid-beta 42 are the best-performing markers of AD neuropathologic changes and are more accurate than assessing individual biomarkers. These CSF biomarkers of AD have been validated against autopsy, and ratio values of CSF amyloid-beta 42 have been further validated against amyloid PET, with overall sensitivity and specificity of approximately 90% and 84%, respectively.
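What sensitivity and specificity mean for an individual patient depends on how common AD pathology is in the population being tested. As a rough illustration (the 50% prevalence figure below is an assumption chosen to represent a memory-clinic population, not a number from the article), Bayes’ rule converts the quoted CSF ratio performance (about 90% sensitivity and 84% specificity) into post-test probabilities:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Convert test accuracy into post-test probabilities via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)   # P(disease | positive test)
    npv = true_neg / (true_neg + false_neg)   # P(no disease | negative test)
    return ppv, npv

# Sensitivity/specificity are the article's CSF amyloid-beta 42 ratio figures;
# the 50% prevalence is an assumed value, for illustration only.
ppv, npv = predictive_values(sensitivity=0.90, specificity=0.84, prevalence=0.50)
print(f"PPV: {ppv:.0%}, NPV: {npv:.0%}")  # → PPV: 85%, NPV: 89%
```

At lower pretest probabilities, such as screening in primary care, the same test yields a much lower positive predictive value, which is one reason these biomarkers are recommended in specialist rather than primary care settings.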
Some of the most exciting recent advances in AD center around the measurement of these proteins and others in plasma. Appropriate-use criteria for plasma biomarkers in the evaluation of patients with cognitive impairment were published in 2022. In addition to their use in clinical trials, these criteria cautiously recommend using these biomarkers in specialized memory clinics in the diagnostic workup of patients with cognitive symptoms, along with confirmatory CSF markers or PET. Additional data are needed before plasma biomarkers of AD are used as standalone diagnostic markers or considered in the primary care setting.
We have made remarkable progress toward more precise molecular diagnosis of brain diseases underlying cognitive impairment and dementia. Ongoing efforts to evaluate the utility of these measures in clinical practice include the need to increase diversity of patients and providers. Ultimately, the tremendous progress in molecular biomarkers for the diseases causing dementia will help the field work toward our common goal of early and accurate diagnosis, better management, and hope for people living with these diseases.
Bradford C. Dickerson, MD, MMSc, is a professor, department of neurology, Harvard Medical School, and director, Frontotemporal Disorders Unit, department of neurology, at Massachusetts General Hospital, both in Boston.
A version of this article first appeared on Medscape.com.
Unlocking the riddle of REM sleep
Eugene Aserinsky, PhD, never wanted to study sleep. He tried being a social worker, a dental student, and even did a stint in the army as an explosives handler. He enrolled at the University of Chicago to pursue organ physiology, but all potential supervisors were too busy to take him on. His only choice was Nathaniel Kleitman, PhD, a middle-aged professor whom Dr. Aserinsky described as “always serious.” Dr. Kleitman was doing research on sleep and so, grudgingly, Dr. Aserinsky had followed suit.
Two years later, in 1953, the duo published a paper that shattered the way we saw sleep. They described a weird phenomenon Dr. Aserinsky later called REM sleep: periods of rapid eye movements paired with wakefulness-like activity in the brain. “We are still at the very beginning of understanding this phenomenon,” Mark Blumberg, PhD, professor of psychological and brain sciences at University of Iowa, Iowa City, said in an interview.
Before Dr. Aserinsky walked into Dr. Kleitman’s lab, the widespread belief held that sleep was “the antithesis of wakefulness,” as Dr. Kleitman wrote in his seminal 1939 book, “Sleep and Wakefulness.” Others saw it as a kind of coma, a passive state. Another theory, developed in the early 20th century by French psychologist Henri Piéron, PhD, held that sleepiness is caused by an accumulation of ‘hypnotoxins’ in the brain.
In his 1913 study, one that would likely fail a contemporary ethics review, Dr. Piéron drew fluid from the brains of sleep-deprived dogs and injected it into other dogs to induce sleep. As he explained in an interview with The Washington Times in 1933, he believed that fatigue toxins accumulate in the brain throughout the wakeful hours, then slowly seep into the spinal column, promoting drowsiness. Once we fall asleep, Dr. Piéron claimed, the hypnotoxins burn away.
From blinking to rapid eye movement
In 1925, when Dr. Kleitman established the world’s first sleep laboratory at the University of Chicago, sleep was a fringe science that most researchers gave a wide berth. Yet Dr. Kleitman was obsessed. The Moldova-born scientist famously worked 24/7 – literally. He not only stayed long hours in his lab, but also slept attached to a plethora of instruments measuring his brain waves, breathing, and heartbeat. At one point, Dr. Kleitman stayed awake for 180 hours (more than a week) to check how forced sleeplessness would affect his body (he later compared it to torture). He also lived 2 weeks aboard a submarine, moved his family north of the Arctic Circle, and spent over a month 119 feet below the surface in a cave in Kentucky, fighting rats, cold, and humidity to study circadian rhythms.
Dr. Kleitman was intrigued by an article in Nature in which the author asserted that he could detect the approach of slumber in train passengers by observing their blink frequencies. He instructed Dr. Aserinsky to observe sleeping infants (being monitored for a different study), to see how their blinking related to sleep. Yet Dr. Aserinsky was not amused. The project, he later wrote, “seemed about as exciting as warm milk.”
Dr. Aserinsky was uncertain whether eyelid movement with the eyes closed constituted a blink; he then noticed a 20-minute span in each hour when eye movement ceased entirely. Still short of getting his degree, Dr. Aserinsky decided to observe sleeping adults. He hauled a dusty clanker of a brain-wave machine out of the university’s basement and started registering the electrical activity of the brains of his dozing subjects. Soon, he noticed something weird.
As he kept staring at the sleeping adults, he noticed that at times they’d have saccadic-like eye movements, just as the EEG machine would register a wake-like state of the brain. At first, he thought the machine was broken (it was ancient, after all). Then, that the subjects were awake and just keeping their eyes shut. Yet after conducting several sessions and tinkering with the EEG machine, Dr. Aserinsky finally concluded that the recordings and observations were correct: Something was indeed happening during sleep that kept the cortex activated and made the subjects’ eyes move in a jerky manner.
Dreams, memory, and thermoregulation
After studying dozens of subjects, including his son and Dr. Kleitman’s daughter, and using miles of polygraph paper, the two scientists published their findings in September 1953 in the journal Science. Dr. Kleitman didn’t expect the discovery to be particularly earth-shattering. When asked in a later interview how much research and excitement he thought the paper would generate, he replied: “none whatsoever.” That’s not how things went, though. “They completely changed the way people think,” Dr. Blumberg said. Once and for all, the REM discovery put to rest the idea that sleep was a passive state where nothing interesting happens.
Dr. Aserinsky soon left the University of Chicago, while Dr. Kleitman continued research on rapid eye movements in sleep with his new student, William Dement, MD. Together, they published studies suggesting that REM periods were when dreaming occurred – they reported that people who were awakened during REM sleep were far more likely to recall dreams than were those awakened outside of that period. “REM sleep = dreams” became established dogma for decades, even though first reports of dreams during non-REM sleep came as early as Dr. Kleitman’s and Dr. Dement’s original research (they assumed these were recollections from the preceding REM episodes).
“It turns out that you can have a perfectly good dream when you haven’t had a previous REM sleep period,” said Jerome Siegel, PhD, professor of psychiatry and biobehavioral sciences at UCLA’s Center for Sleep Research, pointing out that equating REM sleep with dreams is still “a common misconception.”
By the 1960s, REM sleep seemed to be well defined as the combination of rapid eye movement with EEG showing brain activation, first noted by Dr. Aserinsky, as well as muscle atonia – a state of near-total muscle relaxation or paralysis. Today, however, Dr. Blumberg said, things are considerably less clear cut. In one recent paper, Dr. Blumberg and his colleagues went as far as to question whether REM sleep is even “a thing.” REM sleep is prevalent across terrestrial vertebrates, but they found that it is also highly nuanced, messing up old definitions.
Take the platypus, for example, the animal with the most REM sleep (as far as we know): They have rapid eye movements and their bills twitch during REM (stillness punctuated by sudden twitches is typical of that period of sleep), but they don’t have the classic brain activation on EEG. Owls have EEG activation and twitching, but no rapid eye movements, since their eyes are largely immobile. Geese, meanwhile, are missing muscle atonia – that’s why they can sleep standing. And new studies are still coming in, showing, for instance, that even jumping spiders may have REM sleep, complete with jerky eye movements and limb twitching.
For Dr. Siegel, the findings on REM sleep in animals point to the potential explanation of what that bizarre stage of sleep may be all about: thermoregulation. “When you look at differences in sleep among the groups of warm-blooded animals, the correlation is almost perfect, and inverse. The colder they are, the more REM sleep they get,” Dr. Siegel said. During REM sleep, body thermoregulation is basically suspended, and so, as Dr. Siegel argued in The Lancet Neurology last fall, REM sleep could be a vital player in managing our brain’s temperature and metabolic activity during sleep.
Wallace B. Mendelson, MD, professor emeritus of psychiatry at the University of Chicago, said it’s likely, however, that REM sleep has more than one function. “There is no reason why one single theory has to be an answer. Most important physiological functions have multiple functions,” he said. The ideas are many, including that REM sleep helps consolidate our memories and plays an important role in emotion regulation. But it’s not that simple. A Swiss study of nearly 1,000 healthy participants did not show any correlation between sleep stage and memory consolidation. Sleep disruption of any stage can prevent memory consolidation, and quiet wakefulness with closed eyes can be as effective as sleep for memory recall.
In 1971, researchers from the National Institute of Mental Health published results of their study on total suppression of REM sleep. For as long as 40 days, they administered the monoamine oxidase inhibitor (MAOI) phenelzine, a type of drug that can completely eliminate REM sleep, to six patients with anxiety and depression. They showed that suppression of REM sleep could improve symptoms of depression, seemingly without impairing the patients’ cognitive function. Modern antidepressants, too, can greatly diminish REM sleep, Dr. Siegel said. “I’m not aware that there is any dramatic downside in having REM sleep reduced,” he said.
So do we even need REM sleep for optimal performance? Dr. Siegel said that there is a lot of exaggeration about how great REM sleep is for our health. “People just indulge their imaginations,” he said.
Dr. Blumberg pointed out that, in general, as long as you get enough sleep in the first place, you will get enough REM. “You can’t control the amount of REM sleep you have,” he explained.
REM sleep behavior disorder
Even though we may not need REM sleep to function well, REM sleep behavior disorder (RBD) is a sign that our health may be in trouble. In 1986, scientists from the University of Minnesota reported a bizarre REM sleep pathology in four men and one woman who would act out their dreams. One 67-year-old man, for example, reportedly punched and kicked his wife at night for years. One time he found himself kneeling alongside the bed with his arms extended as if he were holding a rifle (he dreamt he was in a shootout). His overall health, however, seemed unaffected apart from self-injury during some episodes.
However, in 1996 the same group of researchers reported that 11 of 29 men originally diagnosed with RBD went on to develop a parkinsonian disorder. Combined data from 24 centers of the International RBD Study Group puts that number as high as 74% at 12-year follow-up. These patients get diagnosed with Parkinson’s disease, dementia with Lewy bodies, or multiple system atrophy. Scientists believe that the protein alpha-synuclein forms toxic clumps in the brain, which are responsible both for malfunctioning of muscle atonia during REM sleep and subsequent neurodegenerative disorders.
While some researchers say that RBD may offer a unique window into better understanding REM sleep, we’re still a long way off from fully figuring out this biological phenomenon. According to Dr. Blumberg, the story of REM sleep has arguably become more muddled in the 7 decades since Dr. Aserinsky and Dr. Kleitman published their original findings, dispelling myths about ‘fatigue toxins’ and sleep as a passive, coma-like state. Dr. Mendelson concurred: “It truly remains a mystery.”
Dr. Blumberg, Dr. Mendelson, and Dr. Siegel reported no relevant disclosures.
A version of this article originally appeared on Medscape.com.
Continuous glucose monitors come to hospitals
But that technological future will require ensuring that the monitoring devices are as accurate as the conventional method, experts told this news organization.
In 2020, the U.S. Food and Drug Administration enabled in-hospital use of CGMs to reduce contact between patients and health care providers during the COVID-19 pandemic. Diabetes is a risk factor for more severe COVID, meaning that many patients with the infection also required ongoing care for their blood sugar problems.
Prior to the pandemic, in-person finger-stick tests were the primary means of measuring glucose for hospitalized patients with diabetes.
The trouble is that finger-stick measurements, while accurate in the moment, quickly become outdated.
“Glucose is a measurement that changes pretty rapidly,” said Eileen Faulds, RN, PhD, an endocrinology nurse and health services researcher at the Ohio State University, Columbus. Finger sticks might occur only four or five times per day, Dr. Faulds noted, or as often as every hour for people who receive insulin intravenously. But even that more frequent pace is far from continuous.
“With CGM we can get the glucose level in real time,” Dr. Faulds said.
Dr. Faulds is lead author of a new study in the Journal of Diabetes Science and Technology, which shows that nurses in the ICU believe that using continuous monitors, subcutaneous filaments connected to sensors that regularly report glucose levels, enables better patient care than does relying on periodic glucose tests alone. Nurses still used traditional finger sticks, which Dr. Faulds notes are highly accurate at the time of the reading.
In a 2022 study, glucose levels generated by CGM and those measured by finger sticks varied by up to 14%. A hybrid care model combining CGMs and finger stick tests may emerge, Dr. Faulds said.
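The kind of CGM-versus-finger-stick comparison described above is often quantified with the mean absolute relative difference (MARD), a standard agreement metric for glucose sensors. The sketch below is purely illustrative, with made-up readings, and is not drawn from the 2022 study itself:

```python
# Illustrative sketch: quantifying agreement between paired CGM and
# finger-stick readings with mean absolute relative difference (MARD).
# All values here are hypothetical.

def mard(cgm_readings, reference_readings):
    """Mean absolute relative difference, in percent, of CGM vs. reference."""
    if len(cgm_readings) != len(reference_readings):
        raise ValueError("readings must be paired")
    diffs = [
        abs(cgm - ref) / ref * 100
        for cgm, ref in zip(cgm_readings, reference_readings)
    ]
    return sum(diffs) / len(diffs)

# Hypothetical paired readings in mg/dL
cgm = [110, 145, 98, 180, 130]
finger_stick = [118, 140, 105, 170, 125]
print(f"MARD: {mard(cgm, finger_stick):.1f}%")  # about 5.4% for these values
```

A lower MARD means the sensor tracks the reference method more closely; the up-to-14% divergence reported in the study is in the range where a hybrid model, confirming CGM readings with occasional finger sticks, remains attractive.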
A gusher of glucose data
People with diabetes have long been able to use CGMs in their daily lives, which typically report the glucose value to a smartphone or watch. The devices are now part of hospital care as well. In 2022, the Food and Drug Administration granted a breakthrough therapy designation to the company Dexcom for use of its CGMs to manage care of people with diabetes in hospitals.
One open question is how often CGMs should report glucose readings for optimum patient health. Dexcom’s G6 CGM reports glucose levels every five minutes, for example, whereas Abbott’s FreeStyle Libre 2 delivers glucose values every minute.
“We wouldn’t look at each value, we would look at the big picture,” to determine if a patient is at risk of becoming hyper- or hypoglycemic, said Lizda Guerrero-Arroyo, MD, a postdoctoral fellow in endocrinology at the Emory University School of Medicine, Atlanta. Dr. Guerrero-Arroyo recently reported that clinicians in multiple ICUs began to use CGMs in conjunction with finger sticks during the pandemic and felt the devices could reduce patient discomfort.
“A finger stick is very painful,” Dr. Guerrero-Arroyo said, and the tests are a bottleneck for the nursing staff who administer them. In contrast, Dr. Faulds said, CGM placement is essentially painless and requires less labor on the ward to manage.
Beyond use in the ICU, clinicians are also experimenting with use of CGMs to monitor blood sugar levels in people with diabetes who are undergoing general surgery. And other researchers are describing how to integrate data from CGMs into patient care tools such as the electronic health record, although a standard way to do this does not yet exist.
Assuming CGMs remain part of the mix for in-hospital care of people with diabetes, clinicians may mainly need trend summaries of how glucose levels rise and fall over time, said data scientist Samantha Spierling Bagsic, PhD, of the Scripps Whittier Diabetes Institute, San Diego. Dr. Guerrero-Arroyo said that she shares that vision. But a minute-by-minute analysis of glucose levels also may be necessary to get a granular sense of how changing a patient’s insulin level affects their blood sugar, Dr. Spierling Bagsic said.
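The distinction between a trend summary and a minute-by-minute readout can be illustrated with a toy reduction of a per-minute glucose stream. This is a hypothetical sketch, not any vendor’s algorithm; the window length and thresholds are assumptions:

```python
# Illustrative sketch (hypothetical, not a vendor algorithm): reducing a
# stream of per-minute CGM values to the kind of trend summary a clinician
# might scan, while the raw minute-level values remain available.

def trend_summary(readings_mg_dl, window=15):
    """Summarize the last `window` minutes as (mean, direction)."""
    recent = readings_mg_dl[-window:]
    mean = sum(recent) / len(recent)
    # Average slope over the window, in mg/dL per minute
    slope = (recent[-1] - recent[0]) / (len(recent) - 1)
    if slope > 1:
        direction = "rising"
    elif slope < -1:
        direction = "falling"
    else:
        direction = "steady"
    return mean, direction

# Hypothetical per-minute readings drifting upward: 100, 102, ..., 138
readings = [100 + 2 * i for i in range(20)]
mean, direction = trend_summary(readings)
print(f"15-min mean: {mean:.0f} mg/dL, trend: {direction}")
```

The summary answers the "big picture" question (is the patient drifting toward hyper- or hypoglycemia?), while the untouched minute-level list supports the granular insulin-response analysis Dr. Spierling Bagsic describes.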
“We need to figure out what data different audiences need, how often we need to measure glucose, and how to present that information to different audiences in different ways,” said Dr. Spierling Bagsic, a co-author of the study about integrating CGM data into patient care tools.
The wider use of CGMs in hospitals may be one silver lining of the COVID-19 pandemic. As an inpatient endocrinology nurse, Dr. Faulds said that she wanted to use CGMs prior to the outbreak, but at that point, a critical mass of studies about their benefits was missing.
“We all know the terrible things that happened during the pandemic,” Dr. Faulds said. “But it gave us the allowance to use CGMs, and we saw that nurses loved them.”
Dr. Faulds reports relationships with Dexcom and Insulet and has received an honorarium from Medscape. Dr. Guerrero-Arroyo and Dr. Spierling Bagsic reported no financial conflicts of interest.
A version of this article originally appeared on Medscape.com.