Artificial intelligence in pulmonology has so far been applied mainly to image datasets for detecting and diagnosing lung malignancies, but a growing number of AI models now use machine learning to improve prediction of other pulmonary conditions, including pulmonary infections, pulmonary fibrosis, and chronic obstructive pulmonary disease (COPD).
These applications are moving beyond the traditional AI model of collecting data from a multitude of images to cast a wider data net that includes electronic health records.
Also on the horizon, ChatGPT technology is poised to make its mark. But pulmonologists and their practices have a number of barriers to clear before they feel a meaningful impact from AI.
The imperative, said AI researcher Ishanu Chattopadhyay, PhD, is to create transformative models that can detect lung disease early on. Dr. Chattopadhyay, an assistant professor of medicine at the University of Chicago and its Institute for Genomics and Systems Biology, and fellow researchers developed an AI algorithm that uses comorbidity signatures in electronic health records to screen for idiopathic pulmonary fibrosis (IPF) (Nature Med. 2022 Sep 29. doi: 10.1038/s41591-022-02010-y).
“If you could do this when somebody walks into a primary care setting and they are barely suspecting something is going on with them or when they don’t have the typical risk factors, there is a certain fraction of these people who do have IPF and they will almost invariably be diagnosed late and/or misdiagnosed,” Dr. Chattopadhyay said, citing a study that found 55% of patients with IPF had at least one misdiagnosis and 38% had two or more (BMC Pulm Med. 2018 Jan 17. doi: 10.1186/s12890-017-0560-x).
Harnessing massive data sets
AI models cull data sets, whether banks of radiographic images or files in an EHR, to extract telltale signatures of a disease state. Dr. Chattopadhyay and his team’s model used three databases with almost 3 million participants and 54,247 positive cases of IPF. Hospitals in Scotland have deployed what they’ve claimed are the first AI models to predict COPD, built with 55,000 patient records from a regional National Health Service database. Another AI model for staging COPD, developed by researchers in the United States and Romania, used more than 18,000 medical records from 588 patients to identify physiological signals predictive of COPD (Advanced Sci. 2023 Feb 19. doi: 10.1002/advs.202203485).
Said Dr. Chattopadhyay: “If I can bring in AI which doesn’t just look at radiological images but actually gets it back where someone walks into primary care using only the information that is available at that point in the patient files and asking for nothing more, it raises a flag reliably that gets you a pulmonary referral that will hopefully reduce the misdiagnosis and late diagnosis.”
Victor Tseng, MD, medical director for pulmonology at Ansible Health in Mountain View, Calif., who is researching the potential of AI in pulmonology, speculated on what functions AI could perform in the clinic. “I think you will start to see much more interventional sort of clinically patient care–facing applications,” he said. That would include acting as a triage layer to direct patient queries to a nurse, physician, or another practitioner; providing patient instructions; serving as therapeutic software; coordinating care; and integrating supply chain issues, he said.
AI vs. spirometry for COPD
Researchers in the United States and Romania, led by Paul Bogdan, PhD, at the University of Southern California Viterbi School of Engineering, developed a model that predicted COPD with nearly 99% accuracy (98.66%) while avoiding many of the shortcomings of spirometry, Dr. Bogdan said.
The models developed by Dr. Bogdan and collaborators operate on a different principle from existing AI platforms, he said: they analyze the properties of the data themselves. As he explained it, they exploit what he called the “geometry of these data” to make inferences and decisions about a patient’s risk for COPD.
“Deep learning is very good for images, for videos, but it doesn’t work that well for signals,” said Mihai Udrescu, PhD, one of the Romanian collaborators. “But if we process the data with the technique brought up by Paul [Bogdan] and his team at USC, deep learning also works well on physiological signals.”
Said Dr. Bogdan, “Nobody thought about using physiological signals to predict COPD before this work. They used spirometry, but spirometry is cumbersome and several steps have to be performed in order to have an accurate spirometry.” His team’s AI models extract and analyze risk data based on 10 minutes of monitoring.
This technology also has the potential to improve accessibility of COPD screening, Dr. Udrescu said. “It can democratize the access to the health care because you don’t need to travel for 100 or 200 miles to see a specialist,” he said. “You just send an app to the mobile phone of a patient, the person puts on a smart watch and wears it for a couple of minutes and the data is either recorded locally or is transmitted and it is analyzed.” The computations can be done locally and in a matter of minutes, he said.
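For illustration only, the sketch below shows how wearable-derived physiological signals might feed a standard classifier. It uses random placeholder data and crude summary statistics rather than the geometric signal properties Dr. Bogdan’s group describes; the function name and feature choices are hypothetical, not the published method.

```python
# Illustrative sketch only: NOT the published USC/Romania model.
# Assumes `signals` holds one 10-minute physiological recording per patient
# and `labels` holds a binary COPD indicator; both are random placeholders here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def simple_signal_features(x: np.ndarray) -> np.ndarray:
    """Crude summary statistics standing in for the geometric signal
    properties described by the researchers (placeholders only)."""
    diffs = np.diff(x)
    return np.array([
        x.mean(),                                     # central tendency
        x.std(),                                      # overall variability
        np.abs(diffs).mean(),                         # average short-term change
        np.percentile(x, 95) - np.percentile(x, 5),   # dynamic range
    ])

rng = np.random.default_rng(0)
signals = rng.normal(size=(200, 6000))   # 200 hypothetical patients
labels = rng.integers(0, 2, size=200)    # hypothetical COPD labels

X = np.vstack([simple_signal_features(s) for s in signals])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

In practice, the reported accuracy rests on far richer signal characterization than these toy features; the sketch simply shows where a short monitoring window would enter such a pipeline.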
In Scotland, a 12-month feasibility study is underway to evaluate an AI model to identify COPD patients at greatest risk for adverse events. A press release from Lenus, the company developing the technology, said the study will use a COPD multidisciplinary team to consider real-time AI model outputs to enable proactive patient encounters and reduce emergency care visits.
Researchers in Paris built an AI model that showed a 68% accuracy in distinguishing people with asthma from people with COPD in administrative medical databases (BMC Pulm Med. 2022 Sep 20. doi: 10.1186/s12890-022-02144-2). They found that asthma patients were younger than those with COPD (a mean of 49.9 vs. 72.1 years) and that COPD occurred mostly in men (68% vs. 33%). And an international team of researchers reported that an AI model that used chest CT scans determined that ever-smokers with COPD who met the imaging criteria for bronchiectasis were more prone to disease exacerbations (Radiology. 2022 Dec 13. doi: 10.1148/radiol.221109).
AI in idiopathic pulmonary fibrosis
The AI model that Dr. Chattopadhyay and collaborators developed achieved 88% accuracy in predicting IPF. The model, known as the zero-burden comorbidity risk score for IPF (ZCoR-IPF), identified IPF cases in adults aged 45 and older 1-4 years sooner than they were diagnosed in a variety of pulmonology practice settings.
The model accounted for about 700 different features of IPF, Dr. Chattopadhyay said, but it deviated from standard AI risk models in that it used a machine learning algorithm to extract disease features that aren’t well understood or even known. “We do not know what all the risk factors of IPF are,” he said. “People who don’t have all the risk factors still get IPF. So we have to step back from the raw EHR data from where the features are being generated automatically, and then you can apply standard ML tools.”
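As a rough illustration of that pipeline, and not the ZCoR-IPF algorithm itself, the sketch below generates features automatically from raw comorbidity codes in hypothetical EHR records and then applies a standard classifier; the diagnosis codes, patient IDs, and labels are invented for the example.

```python
# Illustrative sketch only; this is not the ZCoR-IPF algorithm.
# Diagnosis codes, patient IDs, and labels below are invented for the example.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Each patient's EHR is reduced to the set of comorbidity codes it contains.
records = {
    "pt1": {"J44.9", "I10", "E11.9"},
    "pt2": {"J84.112", "I25.10"},
    "pt3": {"I10", "K21.9"},
    "pt4": {"J84.112", "E11.9", "I10"},
}
has_ipf = {"pt1": 0, "pt2": 1, "pt3": 0, "pt4": 1}   # later-confirmed IPF labels

# One-hot encode the code sets into a sparse feature matrix, with no
# hand-curated risk factors: features are generated from the raw codes.
vectorizer = DictVectorizer()
X = vectorizer.fit_transform([{code: 1 for code in codes} for codes in records.values()])
y = [has_ipf[pid] for pid in records]

# Any standard classifier can then be applied to the generated features.
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=2))
```

The point of the sketch is the ordering Dr. Chattopadhyay describes: features come out of the raw records first, and only then are standard machine learning tools applied.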
Researchers at Nagoya University in Japan also reported on an AI algorithm for predicting IPF that used 646,800 high-resolution CT images and medical records data from 1,068 patients. Their algorithm had an average diagnostic accuracy of 83.6% and, they reported, demonstrated good accuracy even in patients with signs of interstitial pneumonia or who had surgical lung biopsies (Respirology. 2022 Dec 13. doi: 10.1111/resp.14310).
ChatGPT: The next frontier in AI
Dr. Tseng last year led a group of researchers who fed questions from the United States Medical Licensing Exam to a ChatGPT model and found that it answered 60%-65% of questions correctly (PLOS Digit Health. 2023 Feb 9. doi: 10.1371/journal.pdig.0000198). As Dr. Tseng pointed out, that’s good enough to get a medical license.
It may be a matter of time before ChatGPT technology finds its way into the clinic, Dr. Tseng said. A quick ChatGPT query of how it could be used in medicine yielded 12 different answers, from patient triage to providing basic first aid instructions in an emergency.
Dr. Tseng, whose pulmonology practice places an emphasis on virtual care delivery, went deeper than the ChatGPT answer. “If you’re a respiratory therapist and you’re trying to execute a complicated medical care plan written by a physician, there’s a natural disconnect between our language and their language,” he said. “What we have found is that GPT has significantly harmonized the care plan. And that’s amazing because you end up with this single-stream understanding of the care plan, where the language is halfway between a bedside clinician, like the respiratory therapist or nurse, and is also something that a physician can understand and take the bigger sort of direction of care from.”
Barriers to AI in clinic
Widespread adoption of AI tools in the clinic faces numerous barriers, Dr. Tseng said, including physician and staff anxiety about learning new technology. “AI tools, for all purposes, are supposed to allay the cognitive burden and the tedium burden on clinicians, but end up actually costing more time,” he said.
Health care organizations will also need to retool for AI, a group of medical informatics and digital health experts, led by Laurie Lovett Novak, PhD, reported (JAMIA Open. 2023 May 3. doi: 10.1093/jamiaopen/ooad028). But it’s coming nonetheless, Dr. Novak, an associate professor of biomedical informatics at Vanderbilt University Medical Center in Nashville, Tenn., said in an interview.
“In the near future, managers in clinics and inpatient units will be overseeing care that involves numerous AI-based technologies, including predictive analytics, imaging tools, language models, and others,” she said. “Organizations need to support managers by implementing capabilities for algorithmo-vigilance.”
That would include dealing with what she called “algorithmic drift” – when the accuracy of an AI model wanes because of changes in the underlying data – and ensuring that models are unbiased and aren’t used in a way that contributes to inequities in health care. “These new organizational capabilities will demand new tools and new competencies,” she said. That would include policies and processes drawing guidance from medical societies for legal and regulatory direction for managers, staff training, and software documentation.
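One minimal form such vigilance could take, sketched here under assumed parameters (the class name, window size, and tolerance are illustrative and not drawn from the JAMIA Open paper), is to track a deployed model’s rolling accuracy against its validated baseline and flag possible drift when performance slips.

```python
# A hedged sketch of one simple vigilance check: compare a deployed model's
# rolling accuracy with its validated baseline and flag possible drift.
# The class name, window size, and tolerance are illustrative assumptions.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_accuracy: float, window: int = 500, tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)   # 1 = correct prediction, 0 = incorrect

    def record(self, prediction, actual) -> None:
        """Log whether the model's prediction matched the observed outcome."""
        self.outcomes.append(1 if prediction == actual else 0)

    def drifting(self) -> bool:
        """True once a full window of recent cases falls below baseline minus tolerance."""
        if len(self.outcomes) < (self.outcomes.maxlen or 0):
            return False   # not enough recent cases yet
        recent = sum(self.outcomes) / len(self.outcomes)
        return recent < self.baseline - self.tolerance

# Usage: score each case as outcomes are confirmed, then alert informatics
# staff (and consider recalibration) when drifting() flips to True.
monitor = DriftMonitor(baseline_accuracy=0.88)
monitor.record(prediction=1, actual=1)
print(monitor.drifting())
```

Real-world vigilance would also cover bias audits and documentation, as Dr. Novak notes, but a running accuracy check of this kind is the simplest way to catch the waning performance she describes.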
Dr. Tseng envisioned how AI would work in the clinic. “I personally think that, at some time in the near future, AI-driven care coordination, where the AI basically handles appointment scheduling, patient motivation, patient engagement and acts as their health navigator, will be superior to any human health navigator on the whole, only for the reason that AI is indefatigable,” Dr. Tseng said. “It doesn’t get tired, it doesn’t get burned out, and these health navigation care coordination roles are notoriously difficult.”
The physicians and researchers interviewed for this report had no relevant relationships to disclose.