Lobar vs. sublobar resection in stage 1 lung cancer
Thoracic Oncology & Chest Imaging Network
Pleural Disease Section
Lobectomy with intrathoracic nodal dissection remains the standard of care for early stage (tumor size ≤ 3.0 cm) peripheral non–small cell lung cancer (NSCLC). This practice is primarily influenced by data from the mid-1990s associating limited resection (segmentectomy or wedge resection) with increased recurrence rate and mortality compared with lobectomy (Ginsberg et al. Ann Thorac Surg. 1995;60:615). Recent advances in video and robot-assisted thoracic surgery, as well as the implementation of lung cancer screening, improvement in minimally invasive diagnostic modalities, and neoadjuvant therapies have driven the medical community to revisit the role of sublobar lung resection.
Two newly published large randomized, controlled, multicenter, multinational trials (Saji et al. Lancet. 2022;399:1670 and Altorki et al. N Engl J Med. 2023;388:489) have challenged our well-established practices. They compared overall and disease-free survival after sublobar versus lobar resection of early stage NSCLC (tumor size ≤ 2.0 cm and negative intraoperative nodal disease) and demonstrated noninferiority of sublobar resection with respect to both overall survival and disease-free survival. While the sublobar resection in the Saji et al. trial consisted strictly of segmentectomy, the majority of sublobar resections in the Altorki et al. trial were wedge resections. Interestingly, both trials chose a lower cut-off for tumor size (≤ 2.0 cm) than the Ginsberg trial (≤ 3.0 cm), which could arguably account for the difference in outcomes.
Christopher Yurosko, DO – Section Fellow-in-Training
Melissa Rosas, MD – Section Member-at-Large
Labib Debiane, MD - Section Member-at-Large
Applications of ChatGPT and Large Language Models in Medicine and Health Care: Benefits and Pitfalls
The development of [artificial intelligence] is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other.
Bill Gates 1
As the world emerges from the pandemic and the health care system faces new challenges, technology has become an increasingly important tool for health care professionals (HCPs). One such technology is the large language model (LLM), which has the potential to revolutionize the health care industry. ChatGPT, a popular LLM developed by OpenAI, has gained particular attention in the medical community for its ability to pass the United States Medical Licensing Exam.2 This article will explore the benefits and potential pitfalls of using LLMs like ChatGPT in medicine and health care.
Benefits
HCP burnout is a serious issue that can lead to lower productivity, increased medical errors, and decreased patient satisfaction.3 LLMs can alleviate some administrative burdens on HCPs, allowing them to focus on patient care. By assisting with billing, coding, insurance claims, and organizing schedules, LLMs like ChatGPT can free up time for HCPs to focus on what they do best: providing quality patient care.4 ChatGPT also can assist with diagnoses by providing accurate and reliable information based on a vast amount of clinical data. By learning the relationships between different medical conditions, symptoms, and treatment options, ChatGPT can provide an appropriate differential diagnosis (Figure 1).
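For readers curious what such an assistive query might look like in practice, the following is a minimal sketch rather than a description of any deployed product: it assumes the OpenAI Python SDK (v1.x) and an API key in the environment, sends a brief case summary, and asks for a ranked differential. Any output would, of course, still require clinician review.

```python
# Minimal sketch: querying a general-purpose LLM for a differential diagnosis.
# Assumes the OpenAI Python SDK v1.x and an OPENAI_API_KEY environment variable;
# the model name and case text are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

case_summary = (
    "58-year-old man with 3 days of productive cough, fever to 38.9 C, "
    "right-sided pleuritic chest pain, and crackles at the right base."
)

response = client.chat.completions.create(
    model="gpt-4o",  # substitute any available chat model
    messages=[
        {
            "role": "system",
            "content": "You are a clinical decision-support assistant. "
                       "List a ranked differential diagnosis with one-line rationales.",
        },
        {"role": "user", "content": case_summary},
    ],
    temperature=0.2,  # keep the output relatively deterministic
)

print(response.choices[0].message.content)  # draft differential for clinician review
```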
Imaging medical specialists like radiologists, pathologists, dermatologists, and others can benefit from combining computer vision diagnostics with ChatGPT report creation abilities to streamline the diagnostic workflow and improve diagnostic accuracy (Figure 2).
Although using ChatGPT and other LLMs in mental health care has potential benefits, it is essential to note that they are not a substitute for human interaction and personalized care. While ChatGPT can remember information from previous conversations, it cannot provide the same level of personalized, high-quality care that a professional therapist or HCP can. However, by augmenting the work of HCPs, ChatGPT and other LLMs have the potential to make mental health care more accessible and efficient. In addition to providing effective screening in underserved areas, ChatGPT technology may improve the competence of physician assistants and nurse practitioners in delivering mental health care. With the increased incidence of mental health problems in veterans, the pertinence of a ChatGPT-like feature will only increase with time.9
ChatGPT can also be integrated into health care organizations’ websites and mobile apps, providing patients instant access to medical information, self-care advice, symptom checkers, scheduling appointments, and arranging transportation. These features can reduce the burden on health care staff and help patients stay informed and motivated to take an active role in their health. Additionally, health care organizations can use ChatGPT to engage patients by providing reminders for medication renewals and assistance with self-care.4,6,10,11
The potential of artificial intelligence (AI) in the field of medical education and research is immense. According to a study by Gilson and colleagues, ChatGPT has shown promising results as a medical education tool.12 ChatGPT can simulate clinical scenarios, provide real-time feedback, and improve diagnostic skills. It also offers new interactive and personalized learning opportunities for medical students and HCPs.13 ChatGPT can help researchers by streamlining the process of data analysis. It can also administer surveys or questionnaires, facilitate data collection on preferences and experiences, and help in writing scientific publications.14 Nevertheless, to fully unlock the potential of these AI models, additional models that perform checks for factual accuracy, plagiarism, and copyright infringement must be developed.15,16
AI Bill of Rights
To protect the American public, the White House Office of Science and Technology Policy (OSTP) has released a blueprint for an AI Bill of Rights that outlines 5 principles to guard against the harmful effects of AI models: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback (Figure 3).17
One of the biggest challenges with LLMs like ChatGPT is the prevalence of inaccurate information, or so-called hallucinations.16 These inaccuracies stem from the inability of LLMs to distinguish between real and fabricated information. To reduce hallucinations, researchers have proposed several methods, including training models on more diverse data, adversarial training, and human-in-the-loop approaches.21 In addition, medicine-specific models like GatorTron, Med-PaLM, and Almanac have been developed to improve the factual accuracy of results.22-24 Unfortunately, only the GatorTron model is currently available to the public, through the NVIDIA developers’ program.25
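To make the human-in-the-loop idea more concrete, here is a minimal sketch assuming a hypothetical review-queue design (not drawn from any of the cited systems): model-drafted text is held until a clinician approves or edits it, and only the approved version is released.

```python
# Hypothetical human-in-the-loop gate: a model-generated draft is held in a
# review queue and released only after a clinician approves or edits it.
from dataclasses import dataclass
from typing import Optional


@dataclass
class DraftReply:
    patient_message: str
    model_draft: str
    approved_text: Optional[str] = None  # set only by a human reviewer


class ReviewQueue:
    def __init__(self) -> None:
        self._pending: list[DraftReply] = []

    def submit(self, draft: DraftReply) -> None:
        """Model output enters the queue; nothing is sent to the patient yet."""
        self._pending.append(draft)

    def approve(self, draft: DraftReply, final_text: str) -> str:
        """A clinician signs off (possibly after editing); only then is text released."""
        draft.approved_text = final_text
        self._pending.remove(draft)
        return draft.approved_text


queue = ReviewQueue()
draft = DraftReply(
    patient_message="Can I take ibuprofen with my lisinopril?",
    model_draft="(model-generated draft answer goes here)",
)
queue.submit(draft)
# ... later, a clinician reviews the draft in a worklist ...
released = queue.approve(draft, "Clinician-edited, verified answer")
```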
Despite these shortcomings, the future of LLMs in health care is promising. Although these models will not replace HCPs, they can help reduce unnecessary burdens on them, prevent burnout, and enable HCPs and patients to spend more time together. Establishing an official hospital AI oversight governing body to promote best practices could ensure the trustworthy implementation of these new technologies.26
Conclusions
The use of ChatGPT and other LLMs in health care has the potential to revolutionize the industry. By assisting HCPs with administrative tasks, improving the accuracy and reliability of diagnoses, and engaging patients, ChatGPT can help health care organizations provide better care to their patients. While LLMs are not a substitute for human interaction and personalized care, they can augment the work of HCPs, making health care more accessible and efficient. As the health care industry continues to evolve, it will be exciting to see how ChatGPT and other LLMs are used to improve patient outcomes and quality of care. In addition, AI technologies like ChatGPT offer enormous potential in medical education and research. For the benefits to outweigh the risks, it is essential to develop trustworthy AI health care products and to establish oversight bodies that govern their implementation. By doing so, we can help HCPs focus on what matters most: providing high-quality care to patients.
Acknowledgments
This material is the result of work supported by resources and the use of facilities at the James A. Haley Veterans’ Hospital.
1. Gates B. The age of AI has begun. March 21, 2023. Accessed May 10, 2023. https://www.gatesnotes.com/the-age-of-ai-has-begun
2. Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digit Health. 2023;2(2):e0000198. Published 2023 Feb 9. doi:10.1371/journal.pdig.0000198
3. Shanafelt TD, West CP, Sinsky C, et al. Changes in burnout and satisfaction with work-life integration in physicians and the general US working population between 2011 and 2020. Mayo Clin Proc. 2022;97(3):491-506. doi:10.1016/j.mayocp.2021.11.021
4. Goodman RS, Patrinely JR Jr, Osterman T, Wheless L, Johnson DB. On the cusp: considering the impact of artificial intelligence language models in healthcare. Med. 2023;4(3):139-140. doi:10.1016/j.medj.2023.02.008
5. Will ChatGPT transform healthcare? Nat Med. 2023;29(3):505-506. doi:10.1038/s41591-023-02289-5
6. Hopkins AM, Logan JM, Kichenadasse G, Sorich MJ. Artificial intelligence chatbots will revolutionize how cancer patients access information: ChatGPT represents a paradigm-shift. JNCI Cancer Spectr. 2023;7(2):pkad010. doi:10.1093/jncics/pkad010
7. Babar Z, van Laarhoven T, Zanzotto FM, Marchiori E. Evaluating diagnostic content of AI-generated radiology reports of chest X-rays. Artif Intell Med. 2021;116:102075. doi:10.1016/j.artmed.2021.102075
8. Lecler A, Duron L, Soyer P. Revolutionizing radiology with GPT-based models: current applications, future possibilities and limitations of ChatGPT. Diagn Interv Imaging. 2023;S2211-5684(23)00027-X. doi:10.1016/j.diii.2023.02.003
9. Germain JM. Is ChatGPT smart enough to practice mental health therapy? March 23, 2023. Accessed May 11, 2023. https://www.technewsworld.com/story/is-chatgpt-smart-enough-to-practice-mental-health-therapy-178064.html
10. Cascella M, Montomoli J, Bellini V, Bignami E. Evaluating the feasibility of ChatGPT in healthcare: an analysis of multiple clinical and research scenarios. J Med Syst. 2023;47(1):33. Published 2023 Mar 4. doi:10.1007/s10916-023-01925-4
11. Jungwirth D, Haluza D. Artificial intelligence and public health: an exploratory study. Int J Environ Res Public Health. 2023;20(5):4541. Published 2023 Mar 3. doi:10.3390/ijerph20054541
12. Gilson A, Safranek CW, Huang T, et al. How does ChatGPT perform on the United States Medical Licensing Examination? The implications of large language models for medical education and knowledge assessment. JMIR Med Educ. 2023;9:e45312. Published 2023 Feb 8. doi:10.2196/45312
13. Eysenbach G. The role of ChatGPT, generative language models, and artificial intelligence in medical education: a conversation with ChatGPT and a call for papers. JMIR Med Educ. 2023;9:e46885. Published 2023 Mar 6. doi:10.2196/46885
14. Macdonald C, Adeloye D, Sheikh A, Rudan I. Can ChatGPT draft a research article? An example of population-level vaccine effectiveness analysis. J Glob Health. 2023;13:01003. Published 2023 Feb 17. doi:10.7189/jogh.13.01003
15. Masters K. Ethical use of artificial intelligence in health professions education: AMEE Guide No.158. Med Teach. 2023;1-11. doi:10.1080/0142159X.2023.2186203
16. Smith CS. Hallucinations could blunt ChatGPT’s success. IEEE Spectrum. March 13, 2023. Accessed May 11, 2023. https://spectrum.ieee.org/ai-hallucination
17. Executive Office of the President, Office of Science and Technology Policy. Blueprint for an AI Bill of Rights. Accessed May 11, 2023. https://www.whitehouse.gov/ostp/ai-bill-of-rights
18. Executive Office of the President. Executive Order 13960: promoting the use of trustworthy artificial intelligence in the federal government. Fed Regist. 2020;85(236):78939-78943.
19. US Department of Commerce, National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework (AI RMF 1.0). Published January 2023. doi:10.6028/NIST.AI.100-1
20. Microsoft. Azure Cognitive Search—Cloud Search Service. Accessed May 11, 2023. https://azure.microsoft.com/en-us/products/search
21. Aiyappa R, An J, Kwak H, Ahn YY. Can we trust the evaluation on ChatGPT? March 22, 2023. Accessed May 11, 2023. https://arxiv.org/abs/2303.12767v1
22. Yang X, Chen A, Pournejatian N, et al. GatorTron: a large clinical language model to unlock patient information from unstructured electronic health records. Updated December 16, 2022. Accessed May 11, 2023. https://arxiv.org/abs/2203.03540v3
23. Singhal K, Azizi S, Tu T, et al. Large language models encode clinical knowledge. December 26, 2022. Accessed May 11, 2023. https://arxiv.org/abs/2212.13138v1
24. Zakka C, Chaurasia A, Shad R, Hiesinger W. Almanac: knowledge-grounded language models for clinical medicine. March 1, 2023. Accessed May 11, 2023. https://arxiv.org/abs/2303.01229v1
25. NVIDIA. GatorTron-OG. Accessed May 11, 2023. https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/models/gatortron_og
26. Borkowski AA, Jakey CE, Thomas LB, Viswanadhan N, Mastorides SM. Establishing a hospital artificial intelligence committee to improve patient care. Fed Pract. 2022;39(8):334-336. doi:10.12788/fp.0299
WOW! You spend that much time on the EHR?
Unlike many of you, maybe even most of you, I can recall when my office records were handwritten, some would say scribbled, on pieces of paper. They were decipherable by a select few. Some veteran assistants never mastered the skill. Pages were sometimes lavishly illustrated with drawings of body parts, often because I couldn’t remember or spell the correct anatomic term. When I needed to send a referring letter to another provider I typed it myself because dictating never quite suited my personality.
When I joined a small primary care group, the computer-savvy lead physician and a programmer developed our own homegrown EHR. It relied on scanning documents, as so many of us still generated handwritten notes. Even the most vociferous Luddites among us loved the system from day 2.
However, for a variety of reasons, some defensible and some just plain bad, our beloved system needed to be replaced after 7 years. We then invested in an off-the-shelf EHR system that promised more capabilities. We were told there would be a learning curve, but that the plateau would come quickly and we would enjoy our new electronic assistant.
You’ve lived the rest of the story. The learning curve was steep and long and the plateau was a time gobbler. I was probably the most efficient provider in the group, and after 6 months I was leaving the office an hour later than I had been and was seeing the same number of patients. Most of my coworkers were staying and/or working on the computer at home for an extra 2 hours. This change could be easily documented by speaking with our spouses and children. I understand from my colleagues who have stayed in the business that over the ensuing decade and a half since my first experience with the EHR, its insatiable appetite for a clinician’s time has not abated.
The authors of a recent article in Annals of Family Medicine offer some advice on how this tragic situation might be brought under control. First, the investigators point out that the phenomenon of after-hours EHR work, sometimes referred to as WOW (work outside of work), has not gone unnoticed by health system administrators and by the vendors who develop and sell the EHRs. However, analyzing the voluminous data involved is not an easy task, and for the most part it has resulted in metrics that cannot be easily applied across a variety of practice scenarios. Many health care organizations, even large ones, have simply given up and rely on the WOW data and recommendations provided by the vendors, obviously lending the situation a faint odor of conflict of interest.
The bottom line is that there is still no standard, trustworthy way to measure how much of this work clinicians are actually doing. It would seem to me that just asking the spouses and significant others of the clinicians would be sufficient. But the authors of the paper have more specific recommendations. First, they suggest that time working on the computer outside of scheduled time with patients should be separated from any other calculation of EHR usage. They encourage vendors and time-management researchers to develop standardized and validated methods for measuring active EHR use. And, finally, they recommend that all EHR work done outside of time scheduled with patients be attributed to WOW. They feel that clearly labeling it work outside of work gives health care organizations a better chance of developing policies that will address the scourge of burnout.
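For illustration only, here is a minimal sketch of the kind of standardized calculation the authors seem to have in mind, assuming a simplified audit log of timestamps, a 5-minute inactivity cutoff for "active" use, and 8 AM to 5 PM weekday clinic hours; none of this reflects a real vendor's data format or a validated metric.

```python
# Hypothetical WOW (work outside of work) tally from EHR audit-log timestamps.
# Assumptions: activity within 5 minutes of the previous event counts as
# continuous "active" use, and scheduled patient time is 08:00-17:00 weekdays.
from datetime import datetime, timedelta

ACTIVE_GAP = timedelta(minutes=5)


def is_scheduled(ts: datetime) -> bool:
    return ts.weekday() < 5 and 8 <= ts.hour < 17


def wow_minutes(events: list[datetime]) -> float:
    """Sum active EHR time that falls outside scheduled clinic hours."""
    events = sorted(events)
    outside = timedelta()
    for prev, curr in zip(events, events[1:]):
        gap = curr - prev
        if gap <= ACTIVE_GAP and not is_scheduled(prev):
            outside += gap
    return outside.total_seconds() / 60


log = [
    datetime(2023, 5, 2, 20, 1),   # Tuesday evening, charting from home
    datetime(2023, 5, 2, 20, 3),
    datetime(2023, 5, 2, 20, 6),
    datetime(2023, 5, 3, 9, 15),   # Wednesday morning, within clinic hours
    datetime(2023, 5, 3, 9, 17),
]
print(f"WOW in this sample: {wow_minutes(log):.0f} minutes")  # -> 5 minutes
```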
This, unfortunately, is another tragic example of how clinicians have lost control of our work environments. The fact that 20 years have passed and there is still no standardized method for determining how much time we spend on the computer is more evidence we need to raise our voices.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littmann stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Could semaglutide treat addiction as well as obesity?
As demand for semaglutide for weight loss grew following approval of Wegovy by the U.S. Food and Drug Administration in 2021, anecdotal reports of unexpected potential added benefits also began to surface.
Some patients taking these drugs for type 2 diabetes or weight loss also lost interest in addictive and compulsive behaviors such as drinking alcohol, smoking, shopping, nail biting, and skin picking, as reported in articles in the New York Times and The Atlantic, among others.
There is also some preliminary research to support these observations.
This news organization invited three experts to weigh in.
Recent and upcoming studies
The senior author of a recent randomized controlled trial of 127 patients with alcohol use disorder (AUD), Anders Fink-Jensen, MD, said: “I hope that GLP-1 analogs in the future can be used against AUD, but before that can happen, several GLP-1 trials [are needed to] prove an effect on alcohol intake.”
His study involved patients who received exenatide (Byetta, Bydureon, AstraZeneca), the first-generation GLP-1 agonist approved for type 2 diabetes, over 26 weeks, but treatment did not reduce the number of heavy drinking days (the primary outcome), compared with placebo.
However, in post hoc, exploratory analyses, heavy drinking days and total alcohol intake were significantly reduced in the subgroup of patients with AUD and obesity (body mass index > 30 kg/m2).
The participants were also shown pictures of alcohol or neutral subjects while they underwent functional magnetic resonance imaging. Those who had received exenatide, compared with placebo, had significantly less activation of brain reward centers when shown the pictures of alcohol.
“Something is happening in the brain and activation of the reward center is hampered by the GLP-1 compound,” Dr. Fink-Jensen, a clinical psychiatrist at the Psychiatric Centre Copenhagen, remarked in an email.
“If patients with AUD already fulfill the criteria for semaglutide (or other GLP-1 analogs) by having type 2 diabetes and/or a BMI over 30 kg/m2, they can of course use the compound right now,” he noted.
His team is also beginning a study in patients with AUD and a BMI ≥ 30 kg/m2 to investigate the effects on alcohol intake of semaglutide up to 2.4 mg weekly, the maximum dose currently approved for obesity in the United States.
“Based on the potency of exenatide and semaglutide,” Dr. Fink-Jensen said, “we expect that semaglutide will cause a stronger reduction in alcohol intake” than exenatide.
Animal studies have also shown that GLP-1 agonists suppress alcohol-induced reward, alcohol intake, motivation to consume alcohol, alcohol seeking, and relapse drinking of alcohol, Elisabet Jerlhag Holm, PhD, noted.
Interestingly, these agents also suppress the reward, intake, and motivation to consume other addictive drugs like cocaine, amphetamine, nicotine, and some opioids, Jerlhag Holm, professor, department of pharmacology, University of Gothenburg, Sweden, noted in an email.
In a recently published preclinical study, her group provides evidence to help explain anecdotal reports from patients with obesity treated with semaglutide who claim they also reduced their alcohol intake. In the study, semaglutide both reduced alcohol intake (and relapse-like drinking) and decreased body weight of rats of both sexes.
“Future research should explore the possibility of semaglutide decreasing alcohol intake in patients with AUD, particularly those who are overweight,” said Prof. Holm.
“AUD is a heterogenous disorder, and one medication is most likely not helpful for all AUD patients,” she added. “Therefore, an arsenal of different medications is beneficial when treating AUD.”
Janice J. Hwang, MD, MHS, echoed these thoughts: “Anecdotally, there are a lot of reports from patients (and in the news) that this class of medication [GLP-1 agonists] impacts cravings and could impact addictive behaviors.”
“I would say, overall, the jury is still out” as to whether anecdotal reports of GLP-1 agonists curbing addictions will be borne out in randomized controlled trials, she said.
“I think it is much too early to tell” whether these drugs might be approved for treating addictions without more solid clinical trial data, noted Dr. Hwang, who is an associate professor of medicine and chief, division of endocrinology and metabolism, at the University of North Carolina at Chapel Hill.
Meanwhile, another research group at the University of North Carolina at Chapel Hill, led by psychiatrist Christian Hendershot, PhD, is conducting a clinical trial in 48 participants with AUD who are also smokers.
They aim to determine whether patients who receive semaglutide at escalating doses (0.25 mg to 1.0 mg per week via subcutaneous injection) over 9 weeks consume less alcohol (the primary outcome) and smoke less (a secondary outcome) than those who receive a placebo injection. Results are expected in October 2023.
Dr. Fink-Jensen has received an unrestricted research grant from Novo Nordisk to investigate the effects of GLP-1 receptor stimulation on weight gain and metabolic disturbances in patients with schizophrenia treated with an antipsychotic.
A version of this article first appeared on Medscape.com.
Daily multivitamins boost memory in older adults: A randomized trial
This transcript has been edited for clarity.
This is Dr. JoAnn Manson, professor of medicine at Harvard Medical School and Brigham and Women’s Hospital. I’d like to talk with you about a new randomized trial of multivitamins and memory from the COcoa Supplement and Multivitamin Outcomes Study, known as COSMOS. This is the second COSMOS trial to show a benefit of multivitamins on memory and cognition. This trial involved a collaboration between Brigham and Columbia University and was published in the American Journal of Clinical Nutrition. I’d like to acknowledge that I am a coauthor of this study, together with Dr. Howard Sesso, who co-leads the main COSMOS trial with me.
Preserving memory and cognitive function is of critical importance to older adults. Nutritional interventions play an important role because we know the brain requires several nutrients for optimal health, and deficiencies in one or more of these nutrients may accelerate cognitive decline. Some of the micronutrients that are known to be important for brain health include vitamin B12, thiamin, other B vitamins, lutein, magnesium, and zinc, among others.
The current trial included 3,500 participants aged 60 or older and assessed their performance on a web-based memory test. The multivitamin group did significantly better than the placebo group on memory tests and word recall, a finding that was estimated as the equivalent of slowing age-related memory loss by about 3 years. The benefit was first seen at 1 year and was sustained across the 3 years of the trial.
Intriguingly, in both COSMOS-Web and the earlier COSMOS-Mind study, which was done in collaboration with Wake Forest, the participants with a history of cardiovascular disease showed the greatest benefits from multivitamins, perhaps due to lower nutrient status. But the basis for this finding needs to be explored further.
A few important caveats need to be emphasized. First, multivitamins and other dietary supplements will never be a substitute for a healthy diet and healthy lifestyle and should not distract from those goals. But multivitamins may have a role as a complementary strategy. Another caveat is that the randomized trials tested recommended dietary allowances and not megadoses of these micronutrients. In fact, randomized trials of high doses of isolated micronutrients have not clearly shown cognitive benefits, and this suggests that more is not necessarily better and may be worse. High doses also may be associated with toxicity, or they may interfere with absorption or bioavailability of other nutrients.
In COSMOS, over the average 3.6 years of follow-up, and in the earlier Physicians’ Health Study II, over more than a decade of supplementation, multivitamins were found to be safe, without any clear risks or safety concerns. A further caveat is that although Centrum Silver was tested in this trial, we would not expect the benefit to be brand specific, and other high-quality multivitamin brands would be expected to confer similar benefits. Of course, it’s important to check bottles for quality-control documentation, such as the seals of the U.S. Pharmacopeia, NSF International, ConsumerLab.com, and other auditors.
Overall, the finding that a daily multivitamin improved memory and slowed cognitive decline in two separate COSMOS randomized trials is exciting, suggesting that multivitamin supplementation holds promise as a safe, accessible, and affordable approach to protecting cognitive health in older adults. Further research will be needed to understand who is most likely to benefit and the biological mechanisms involved. Expert committees will have to look at the research and decide whether any changes in guidelines are indicated in the future.
Dr. Manson is Professor of Medicine and the Michael and Lee Bell Professor of Women’s Health, Harvard Medical School and director of the Division of Preventive Medicine, Brigham and Women’s Hospital, both in Boston. She reported receiving funding/donations from Mars Symbioscience.
A version of this article first appeared on Medscape.com.
Beating jet lag at CHEST 2023
Sleep Medicine Network
Non-Respiratory Sleep Section
Want to feel your best when enjoying CHEST 2023 sessions, games, vendors, networking events, and much more on the island paradise of Hawai’i? It’s time to start making plans to align your circadian rhythm with Hawai’i Standard Time (HST).
Dr. Sabra Abbott, a circadian rhythm expert and the Director of the Circadian Medicine Clinic at Northwestern University, recommends that “to best adapt to the time zone change, you can take advantage of the time-of-day specific phase shifting properties of light and melatonin.”
Luckily, afternoon and early evening light exposure is encouraged for this westward shift, which is a great excuse for some extra hours on the beach! Don’t forget your sunglasses to help block light in the morning.
Once the meeting has concluded, attendees from the mainland United States will need to advance their internal clocks as they travel east back home. This can be achieved by taking melatonin 0.5 mg around bedtime and seeking bright light during the mid-to-late morning.
To develop a personalized sleep prescription based on your time zone and preferred sleep times, you can use an online jet lag calculator, such as Jet Lag Rooster (jetlag.sleepopolis.com; no affiliations with authors or Dr. Abbott).
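If you prefer a back-of-the-envelope estimate, the size of the needed shift is simply the difference in UTC offsets, and as a rough rule of thumb the circadian clock adapts on the order of an hour per day. The snippet below is illustrative only, not a clinical tool; the offsets are assumptions for October 2023.

```python
# Illustrative jet lag arithmetic for CHEST 2023 (assumed offsets, October 2023).
HOME_UTC_OFFSET = -4       # e.g., US Eastern Daylight Time
HAWAII_UTC_OFFSET = -10    # Hawai'i Standard Time (no daylight saving)

shift_hours = HOME_UTC_OFFSET - HAWAII_UTC_OFFSET   # positive = traveling west, clock must delay
ADAPTATION_RATE_H_PER_DAY = 1.0                     # rough rule of thumb

direction = ("delay (later bedtime, evening light)" if shift_hours > 0
             else "advance (earlier bedtime, morning light)")
days_needed = abs(shift_hours) / ADAPTATION_RATE_H_PER_DAY
print(f"{abs(shift_hours)}-hour {direction}; expect roughly {days_needed:.0f} days to adapt")
```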
To learn more about circadian rhythm alignment when working and traveling, we’ll see you at the CHEST 2023 session “Shifting to Hawai’i – Jet Lag, Shift Workers, and Sleep for Health Care Providers” (10/8/2023 at 0815-HST). If you haven't registered for the meeting, make sure to do so soon! You'll find the full schedule, pricing, and more at the CHEST 2023 website.
Paul Chung, DO – Section Fellow-in-Training
AI efforts make strides in predicting progression to RA
MILAN – Two independent efforts to use artificial intelligence (AI) to predict the development of early rheumatoid arthritis (RA) in patients with signs and symptoms not meeting full disease criteria showed good, near expert-level accuracy, according to findings from two studies presented at the annual European Congress of Rheumatology.
In one study, researchers from Leiden University Medical Center in the Netherlands developed an AI-based method to automatically analyze MR scans of extremities in order to predict early rheumatoid arthritis. The second study involved a Japanese research team that used machine learning to create a model capable of predicting progression from undifferentiated arthritis (UA) to RA. Both approaches would facilitate early diagnosis of RA, enabling timely treatment and improved clinical outcomes.
Lennart Jans, MD, PhD, who was not involved in either study but works with AI-assisted imaging analysis on a daily basis as head of clinics in musculoskeletal radiology at Ghent University Hospital and a professor of radiology at Ghent University in Belgium, said that integrating AI into health care poses several challenging aspects that need to be addressed. “There are three main challenges associated with the development and implementation of AI-based tools in clinical practice,” he said. “Firstly, obtaining heterogeneous datasets from different image hardware vendors, diverse racial and ethnic backgrounds, and various ages and genders is crucial for training and testing the AI algorithms. Secondly, AI algorithms need to achieve a predetermined performance level depending on the specific use case. Finally, a regulatory pathway must be followed to obtain the necessary FDA or MDR [medical devices regulation] certification before applying an AI use case in clinical practice.”
RA prediction
Yanli Li, the first author of the study and a member of the division of image processing at Leiden University Medical Center, explained the potential benefits of early RA prediction. “If we could determine whether a patient presenting with clinically suspected arthralgia (CSA) or early onset arthritis (EAC) is likely to develop RA in the near future, physicians could initiate treatment earlier, reducing the risk of disease progression.”
Currently, rheumatologists estimate the likelihood of developing RA by visually scoring MR scans using the RAMRIS scoring system. “We decided to explore the use of AI,” Dr. Li explained, “because it could save time, reduce costs and labor, eliminate the need for scoring training, and allow for hypothesis-free discoveries.”
The research team collected MR scans of the hands and feet from Leiden University Medical Center’s radiology department. The dataset consisted of images from 177 healthy individuals, 692 subjects with CSA (including 113 who developed RA), and 969 with EAC (including 447 who developed RA). The images underwent automated preprocessing to remove artifacts and standardize the input for the computer. Subsequently, a deep learning model was trained to predict RA development within a 2-year time frame.
The training process involved several steps. Initially, the researchers pretrained the model to learn anatomy by masking parts of the images and tasking the computer with reconstructing them. Subsequently, the AI was trained to differentiate between the groups (EAC vs. healthy and CSA vs. healthy), then between RA and other disorders. Finally, the AI model was trained to predict RA.
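For readers curious about the mechanics, the staged approach lends itself to a compact sketch. The PyTorch outline below is purely illustrative and is not the Leiden group’s actual code: the layer sizes, masking scheme, and two-class head are assumptions chosen only to show the idea of masked-reconstruction pretraining followed by reuse of the encoder for classification.

```python
import torch
import torch.nn as nn

class SmallEncoder(nn.Module):
    """Toy convolutional encoder standing in for the study's image backbone."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class SmallDecoder(nn.Module):
    """Toy decoder used only during masked-reconstruction pretraining."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
        )

    def forward(self, z):
        return self.net(z)

def mask_patches(x, patch=8, frac=0.5):
    """Zero out a random fraction of patches in each image (simple masking)."""
    x = x.clone()
    b, _, h, w = x.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            drop = torch.rand(b) < frac
            x[drop, :, i:i + patch, j:j + patch] = 0.0
    return x

encoder, decoder = SmallEncoder(), SmallDecoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
recon_loss = nn.MSELoss()

def pretrain_step(batch):
    """Stage 1: learn anatomy by reconstructing masked MR slices, batch shape (B, 1, H, W)."""
    recon = decoder(encoder(mask_patches(batch)))
    loss = recon_loss(recon, batch)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Later stages: reuse the pretrained encoder with a small classification head,
# trained first on group membership (EAC/CSA vs. healthy) and finally on
# RA development within 2 years, with an ordinary cross-entropy loss.
classifier = nn.Sequential(encoder, nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))
clf_loss = nn.CrossEntropyLoss()
```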
The accuracy of the model was evaluated using the area under the receiver operating characteristic curve (AUROC). The model that was trained using MR scans of the hands (including the wrist and metacarpophalangeal joints) achieved a mean AUROC of 0.84 for distinguishing EAC from healthy subjects and 0.83 for distinguishing CSA from healthy subjects. The model trained using MR scans of both the hands and feet achieved a mean AUROC of 0.71 for distinguishing RA from non-RA cases in EAC. The accuracy of the model in predicting RA using MR scans of the hands was 0.73, which closely matches the reported accuracy of visual scoring by human experts (0.74). Importantly, the generation and analysis of heat maps suggested that the deep learning model predicts RA based on known inflammatory signals.
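AUROC summarizes how well a model’s predicted risks rank true cases above non-cases, with 0.5 indicating chance-level discrimination and 1.0 a perfect ranking. A minimal, self-contained example (the labels and probabilities below are hypothetical, not study data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical held-out predictions: y_true marks who developed RA within 2 years,
# y_prob is the model's predicted probability for each subject.
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1])
y_prob = np.array([0.10, 0.35, 0.80, 0.60, 0.55, 0.90, 0.25, 0.65])

print(f"AUROC = {roc_auc_score(y_true, y_prob):.2f}")
```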
“Automatic RA prediction using AI interpretation of MR scans is feasible,” Dr. Li said. “Incorporating additional clinical data will likely further enhance the AI prediction, and the heat maps may contribute to the discovery of new MRI biomarkers for RA development.”
“AI models and engines have achieved near-expertise levels for various use cases, including the early detection of RA on MRI scans of the hands,” said Dr. Jans, the Ghent University radiologist. “We are observing the same progress in AI detection of rheumatic diseases in other imaging modalities, such as radiography, CT, and ultrasound. However, it is important to note that the reported performances often apply to selected cohorts with standardized imaging protocols. The next challenge [for Dr. Li and colleagues, and others] will be to train and test these algorithms using more heterogeneous datasets to make them applicable in real-world settings.”
A ‘transitional phase’ of applying AI techniques
“In a medical setting, as computer scientists, we face unique challenges,” pointed out Berend C. Stoel, MSc, PhD, the senior author of the Leiden study. “Our team consists of approximately 30-35 researchers, primarily electrical engineers or computer scientists, situated within the radiology department of Leiden University Medical Center. Our focus is on image processing, seeking AI-based solutions for image analysis, particularly utilizing deep learning techniques.”
Their objective is to validate this method more broadly, and to achieve that, they require collaboration with other hospitals. Up until now, they have primarily worked with a specific type of MR image: extremity MR scans. These scans are conducted in only a few centers equipped with extremity MR scanners, which can accommodate only hands or feet.
“We are currently in a transitional phase, aiming to apply our methods to standard MR scans, which are more widely available,” Dr. Stoel informed this news organization. “We are engaged in various projects. One project, nearing completion, involves the scoring of early RA, where we train the computer to imitate the actions of rheumatologists or radiologists. We started with a relatively straightforward approach, but AI offers a multitude of possibilities. In the project presented at EULAR, we manipulated the images in a different manner, attempting to predict future events. We also have a parallel project where we employ AI to detect inflammatory changes over time by analyzing sequences of images (MR scans). Furthermore, we have developed AI models to distinguish between treatment and placebo groups. Once the neural network has been trained for this task, we can inquire about the location and timing of changes, thereby gaining insights into the therapy’s response.
“When considering the history of AI, it has experienced both ups and downs. We are currently in a promising phase, but if certain projects fail, expectations might diminish. My hope is that we will indeed revolutionize and enhance disease diagnosis, monitoring, and prediction. Additionally, AI may provide us with additional information that we, as humans, may not be able to extract from these images. However, it is difficult to predict where we will stand in 5-10 years,” he concluded.
Predicting disease progression
The second study, which explored the application of AI in predicting the progression of undifferentiated arthritis (UA) to RA, was presented by Takayuki Fujii, MD, PhD, assistant professor in the department of advanced medicine for rheumatic diseases at Kyoto University’s Graduate School of Medicine in Japan. “Predicting the progression of RA from UA remains an unmet medical need,” he reminded the audience.
Dr. Fujii’s team used data from the KURAMA cohort, a large observational RA cohort from a single center, to develop a machine learning model. The study included a total of 322 patients initially diagnosed with UA. The deep neural network (DNN) model was trained using 24 clinical features that are easily obtainable in routine clinical practice, such as age, sex, C-reactive protein (CRP) levels, and disease activity score in 28 joints using erythrocyte sedimentation rate (DAS28-ESR). The DNN model achieved a prediction accuracy of 85.1% in the training cohort. When the model was applied to validation data from an external dataset consisting of 88 patients from the ANSWER cohort, a large multicenter observational RA cohort, the prediction accuracy was 80%.
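As a rough illustration of what such a tabular model looks like in code (not the authors’ implementation), the scikit-learn sketch below trains a small feed-forward network on 24 clinical features and then scores it on an external set. The feature values and labels are synthetic placeholders, so the accuracies it prints will not reproduce the 85.1% and 80% reported in the study.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 24 routinely collected features (age, sex, CRP,
# DAS28-ESR, etc.); in the study these came from the KURAMA (training) and
# ANSWER (external validation) cohorts.
X_train = rng.normal(size=(322, 24))
y_train = rng.integers(0, 2, size=322)        # 1 = progressed from UA to RA
X_external = rng.normal(size=(88, 24))
y_external = rng.integers(0, 2, size=88)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)

print("training accuracy:", model.score(X_train, y_train))
print("external validation accuracy:", model.score(X_external, y_external))
```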
“We have developed a machine learning model that can predict the progression of RA from UA using clinical parameters,” Dr. Fujii concluded. “This model has the potential to assist rheumatologists in providing appropriate care and timely intervention for patients with UA.”
“Dr. Fujii presented a fascinating study,” Dr. Jans said. “They achieved an accuracy of 80% when applying a DNN model to predict progression from UA to RA. This level of accuracy is relatively high and certainly promising. However, it is important to consider that a pre-test probability of 30% [for progressing from UA to RA] is also relatively high, which partially explains the high accuracy. Nonetheless, this study represents a significant step forward in the clinical management of patients with UA, as it helps identify those who may benefit the most from regular clinical follow-up.”
Dr. Li and Dr. Stoel report no relevant financial relationships with industry. Dr. Fujii has received speaking fees from Asahi Kasei, AbbVie, Chugai, and Tanabe Mitsubishi Pharma. Dr. Jans has received speaking fees from AbbVie, UCB, Lilly, and Novartis; he is cofounder of RheumaFinder. The Leiden study was funded by the Dutch Research Council and the China Scholarship Council. The study by Dr. Fujii and colleagues had no outside funding.
A version of this article first appeared on Medscape.com.
MILAN – Two independent efforts to use artificial intelligence (AI) to predict the development of early rheumatoid arthritis (RA) from patients with signs and symptoms not meeting full disease criteria showed good, near expert-level accuracy, according to findings from two studies presented at the annual European Congress of Rheumatology.
In one study, researchers from Leiden University Medical Center in the Netherlands developed an AI-based method to automatically analyze MR scans of extremities in order to predict early rheumatoid arthritis. The second study involved a Japanese research team that used machine learning to create a model capable of predicting progression from undifferentiated arthritis (UA) to RA. Both approaches would facilitate early diagnosis of RA, enabling timely treatment and improved clinical outcomes.
Lennart Jans, MD, PhD, who was not involved in either study but works with AI-assisted imaging analysis on a daily basis as head of clinics in musculoskeletal radiology at Ghent University Hospital and a professor of radiology at Ghent University in Belgium, said that integrating AI into health care poses several challenging aspects that need to be addressed. “There are three main challenges associated with the development and implementation of AI-based tools in clinical practice,” he said. “Firstly, obtaining heterogeneous datasets from different image hardware vendors, diverse racial and ethnic backgrounds, and various ages and genders is crucial for training and testing the AI algorithms. Secondly, AI algorithms need to achieve a predetermined performance level depending on the specific use case. Finally, a regulatory pathway must be followed to obtain the necessary FDA or MDR [medical devices regulation] certification before applying an AI use case in clinical practice.”
RA prediction
Yanli Li, the first author of the study and a member of the division of image processing at Leiden University Medical Center, explained the potential benefits of early RA prediction. “If we could determine whether a patient presenting with clinically suspected arthralgia (CSA) or early onset arthritis (EAC) is likely to develop RA in the near future, physicians could initiate treatment earlier, reducing the risk of disease progression.”
Currently, rheumatologists estimate the likelihood of developing RA by visually scoring MR scans using the RAMRIS scoring system. “We decided to explore the use of AI,” Dr. Li explained, “because it could save time, reduce costs and labor, eliminate the need for scoring training, and allow for hypothesis-free discoveries.”
The research team collected MR scans of the hands and feet from Leiden University Medical Center’s radiology department. The dataset consisted of images from 177 healthy individuals, 692 subjects with CSA (including 113 who developed RA), and 969 with EAC (including 447 who developed RA). The images underwent automated preprocessing to remove artifacts and standardize the input for the computer. Subsequently, a deep learning model was trained to predict RA development within a 2-year time frame.
The training process involved several steps. Initially, the researchers pretrained the model to learn anatomy by masking parts of the images and tasking the computer with reconstructing them. Subsequently, the AI was trained to differentiate between the groups (EAC vs. healthy and CSA vs. healthy), then between RA and other disorders. Finally, the AI model was trained to predict RA.
The accuracy of the model was evaluated using the area under the receiver operator characteristic curve (AUROC). The model that was trained using MR scans of the hands (including the wrist and metacarpophalangeal joints) achieved a mean AUROC of 0.84 for distinguishing EAC from healthy subjects and 0.83 for distinguishing CSA from healthy subjects. The model trained using MR scans of both the hands and feet achieved a mean AUROC of 0.71 for distinguishing RA from non-RA cases in EAC. The accuracy of the model in predicting RA using MR scans of the hands was 0.73, which closely matches the reported accuracy of visual scoring by human experts (0.74). Importantly, the generation and analysis of heat maps suggested that the deep learning model predicts RA based on known inflammatory signals.
“Automatic RA prediction using AI interpretation of MR scans is feasible,” Dr. Li said. “Incorporating additional clinical data will likely further enhance the AI prediction, and the heat maps may contribute to the discovery of new MRI biomarkers for RA development.”
“AI models and engines have achieved near-expertise levels for various use cases, including the early detection of RA on MRI scans of the hands,” said Dr. Jans, the Ghent University radiologist. “We are observing the same progress in AI detection of rheumatic diseases in other imaging modalities, such as radiography, CT, and ultrasound. However, it is important to note that the reported performances often apply to selected cohorts with standardized imaging protocols. The next challenge [for Dr. Li and colleagues, and others] will be to train and test these algorithms using more heterogeneous datasets to make them applicable in real-world settings.”
A ‘transitional phase’ of applying AI techniques
“In a medical setting, as computer scientists, we face unique challenges,” pointed out Berend C. Stoel, MSc, PhD, the senior author of the Leiden study. “Our team consists of approximately 30-35 researchers, primarily electrical engineers or computer scientists, situated within the radiology department of Leiden University Medical Center. Our focus is on image processing, seeking AI-based solutions for image analysis, particularly utilizing deep learning techniques.”
Their objective is to validate this method more broadly, and to achieve that, they require collaboration with other hospitals. Up until now, they have primarily worked with a specific type of MR images, extremity MR scans. These scans are conducted in only a few centers equipped with extremity MR scanners, which can accommodate only hands or feet.
“We are currently in a transitional phase, aiming to apply our methods to standard MR scans, which are more widely available,” Dr. Stoel informed this news organization. “We are engaged in various projects. One project, nearing completion, involves the scoring of early RA, where we train the computer to imitate the actions of rheumatologists or radiologists. We started with a relatively straightforward approach, but AI offers a multitude of possibilities. In the project presented at EULAR, we manipulated the images in a different manner, attempting to predict future events. We also have a parallel project where we employ AI to detect inflammatory changes over time by analyzing sequences of images (MR scans). Furthermore, we have developed AI models to distinguish between treatment and placebo groups. Once the neural network has been trained for this task, we can inquire about the location and timing of changes, thereby gaining insights into the therapy’s response.
“When considering the history of AI, it has experienced both ups and downs. We are currently in a promising phase, but if certain projects fail, expectations might diminish. My hope is that we will indeed revolutionize and enhance disease diagnosis, monitoring, and prediction. Additionally, AI may provide us with additional information that we, as humans, may not be able to extract from these images. However, it is difficult to predict where we will stand in 5-10 years,” he concluded.
Predicting disease progression
The second study, which explored the application of AI in predicting the progression of undifferentiated arthritis (UA) to RA, was presented by Takayuki Fujii, MD, PhD, assistant professor in the department of advanced medicine for rheumatic diseases at Kyoto University’s Graduate School of Medicine in Japan. “Predicting the progression of RA from UA remains an unmet medical need,” he reminded the audience.
Dr. Fujii’s team used data from the KURAMA cohort, a large observational RA cohort from a single center, to develop a machine learning model. The study included a total of 322 patients initially diagnosed with UA. The deep neural network (DNN) model was trained using 24 clinical features that are easily obtainable in routine clinical practice, such as age, sex, C-reactive protein (CRP) levels, and disease activity score in 28 joints using erythrocyte sedimentation rate (DAS28-ESR). The DNN model achieved a prediction accuracy of 85.1% in the training cohort. When the model was applied to validation data from an external dataset consisting of 88 patients from the ANSWER cohort, a large multicenter observational RA cohort, the prediction accuracy was 80%.
“We have developed a machine learning model that can predict the progression of RA from UA using clinical parameters,” Dr. Fujii concluded. “This model has the potential to assist rheumatologists in providing appropriate care and timely intervention for patients with UA.”
“Dr. Fujii presented a fascinating study,” Dr. Jans said. “They achieved an accuracy of 80% when applying a DNN model to predict progression from UA to RA. This level of accuracy is relatively high and certainly promising. However, it is important to consider that a pre-test probability of 30% [for progressing from UA to RA] is also relatively high, which partially explains the high accuracy. Nonetheless, this study represents a significant step forward in the clinical management of patients with UA, as it helps identify those who may benefit the most from regular clinical follow-up.”
Dr. Li and Dr. Stoel report no relevant financial relationships with industry. Dr. Fujii has received speaking fees from Asahi Kasei, AbbVie, Chugai, and Tanabe Mitsubishi Pharma. Dr. Jans has received speaking fees from AbbVie, UCB, Lilly, and Novartis; he is cofounder of RheumaFinder. The Leiden study was funded by the Dutch Research Council and the China Scholarship Council. The study by Dr. Fujii and colleagues had no outside funding.
A version of this article first appeared on Medscape.com.
MILAN – Two independent efforts to use artificial intelligence (AI) to predict the development of early rheumatoid arthritis (RA) from patients with signs and symptoms not meeting full disease criteria showed good, near expert-level accuracy, according to findings from two studies presented at the annual European Congress of Rheumatology.
In one study, researchers from Leiden University Medical Center in the Netherlands developed an AI-based method to automatically analyze MR scans of extremities in order to predict early rheumatoid arthritis. The second study involved a Japanese research team that used machine learning to create a model capable of predicting progression from undifferentiated arthritis (UA) to RA. Both approaches would facilitate early diagnosis of RA, enabling timely treatment and improved clinical outcomes.
Lennart Jans, MD, PhD, who was not involved in either study but works with AI-assisted imaging analysis on a daily basis as head of clinics in musculoskeletal radiology at Ghent University Hospital and a professor of radiology at Ghent University in Belgium, said that integrating AI into health care poses several challenging aspects that need to be addressed. “There are three main challenges associated with the development and implementation of AI-based tools in clinical practice,” he said. “Firstly, obtaining heterogeneous datasets from different image hardware vendors, diverse racial and ethnic backgrounds, and various ages and genders is crucial for training and testing the AI algorithms. Secondly, AI algorithms need to achieve a predetermined performance level depending on the specific use case. Finally, a regulatory pathway must be followed to obtain the necessary FDA or MDR [medical devices regulation] certification before applying an AI use case in clinical practice.”
RA prediction
Yanli Li, the first author of the study and a member of the division of image processing at Leiden University Medical Center, explained the potential benefits of early RA prediction. “If we could determine whether a patient presenting with clinically suspected arthralgia (CSA) or early onset arthritis (EAC) is likely to develop RA in the near future, physicians could initiate treatment earlier, reducing the risk of disease progression.”
Currently, rheumatologists estimate the likelihood of developing RA by visually scoring MR scans using the RAMRIS scoring system. “We decided to explore the use of AI,” Dr. Li explained, “because it could save time, reduce costs and labor, eliminate the need for scoring training, and allow for hypothesis-free discoveries.”
The research team collected MR scans of the hands and feet from Leiden University Medical Center’s radiology department. The dataset consisted of images from 177 healthy individuals, 692 subjects with CSA (including 113 who developed RA), and 969 with EAC (including 447 who developed RA). The images underwent automated preprocessing to remove artifacts and standardize the input for the computer. Subsequently, a deep learning model was trained to predict RA development within a 2-year time frame.
The training process involved several steps. Initially, the researchers pretrained the model to learn anatomy by masking parts of the images and tasking the computer with reconstructing them. Subsequently, the AI was trained to differentiate between the groups (EAC vs. healthy and CSA vs. healthy), then between RA and other disorders. Finally, the AI model was trained to predict RA.
The accuracy of the model was evaluated using the area under the receiver operator characteristic curve (AUROC). The model that was trained using MR scans of the hands (including the wrist and metacarpophalangeal joints) achieved a mean AUROC of 0.84 for distinguishing EAC from healthy subjects and 0.83 for distinguishing CSA from healthy subjects. The model trained using MR scans of both the hands and feet achieved a mean AUROC of 0.71 for distinguishing RA from non-RA cases in EAC. The accuracy of the model in predicting RA using MR scans of the hands was 0.73, which closely matches the reported accuracy of visual scoring by human experts (0.74). Importantly, the generation and analysis of heat maps suggested that the deep learning model predicts RA based on known inflammatory signals.
“Automatic RA prediction using AI interpretation of MR scans is feasible,” Dr. Li said. “Incorporating additional clinical data will likely further enhance the AI prediction, and the heat maps may contribute to the discovery of new MRI biomarkers for RA development.”
“AI models and engines have achieved near-expertise levels for various use cases, including the early detection of RA on MRI scans of the hands,” said Dr. Jans, the Ghent University radiologist. “We are observing the same progress in AI detection of rheumatic diseases in other imaging modalities, such as radiography, CT, and ultrasound. However, it is important to note that the reported performances often apply to selected cohorts with standardized imaging protocols. The next challenge [for Dr. Li and colleagues, and others] will be to train and test these algorithms using more heterogeneous datasets to make them applicable in real-world settings.”
A ‘transitional phase’ of applying AI techniques
“In a medical setting, as computer scientists, we face unique challenges,” pointed out Berend C. Stoel, MSc, PhD, the senior author of the Leiden study. “Our team consists of approximately 30-35 researchers, primarily electrical engineers or computer scientists, situated within the radiology department of Leiden University Medical Center. Our focus is on image processing, seeking AI-based solutions for image analysis, particularly utilizing deep learning techniques.”
Their objective is to validate this method more broadly, and to achieve that, they require collaboration with other hospitals. Up until now, they have primarily worked with a specific type of MR images, extremity MR scans. These scans are conducted in only a few centers equipped with extremity MR scanners, which can accommodate only hands or feet.
“We are currently in a transitional phase, aiming to apply our methods to standard MR scans, which are more widely available,” Dr. Stoel informed this news organization. “We are engaged in various projects. One project, nearing completion, involves the scoring of early RA, where we train the computer to imitate the actions of rheumatologists or radiologists. We started with a relatively straightforward approach, but AI offers a multitude of possibilities. In the project presented at EULAR, we manipulated the images in a different manner, attempting to predict future events. We also have a parallel project where we employ AI to detect inflammatory changes over time by analyzing sequences of images (MR scans). Furthermore, we have developed AI models to distinguish between treatment and placebo groups. Once the neural network has been trained for this task, we can inquire about the location and timing of changes, thereby gaining insights into the therapy’s response.
“When considering the history of AI, it has experienced both ups and downs. We are currently in a promising phase, but if certain projects fail, expectations might diminish. My hope is that we will indeed revolutionize and enhance disease diagnosis, monitoring, and prediction. Additionally, AI may provide us with additional information that we, as humans, may not be able to extract from these images. However, it is difficult to predict where we will stand in 5-10 years,” he concluded.
Predicting disease progression
The second study, which explored the application of AI in predicting the progression of undifferentiated arthritis (UA) to RA, was presented by Takayuki Fujii, MD, PhD, assistant professor in the department of advanced medicine for rheumatic diseases at Kyoto University’s Graduate School of Medicine in Japan. “Predicting the progression of RA from UA remains an unmet medical need,” he reminded the audience.
Dr. Fujii’s team used data from the KURAMA cohort, a large observational RA cohort from a single center, to develop a machine learning model. The study included a total of 322 patients initially diagnosed with UA. The deep neural network (DNN) model was trained using 24 clinical features that are easily obtainable in routine clinical practice, such as age, sex, C-reactive protein (CRP) levels, and disease activity score in 28 joints using erythrocyte sedimentation rate (DAS28-ESR). The DNN model achieved a prediction accuracy of 85.1% in the training cohort. When the model was applied to validation data from an external dataset consisting of 88 patients from the ANSWER cohort, a large multicenter observational RA cohort, the prediction accuracy was 80%.
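For readers curious how such a model is typically put together, the sketch below is a minimal, hypothetical illustration of training a small neural-network classifier on routine tabular clinical features. It is not the authors' code; the file name, feature handling, and hyperparameters are assumptions for illustration only.

```python
# Minimal, hypothetical sketch of a neural-network classifier trained on
# routine clinical features to predict UA-to-RA progression. This is NOT the
# study's model; file name, features, and hyperparameters are illustrative.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Hypothetical table: one row per patient, numeric clinical features
# (e.g., age, CRP, DAS28-ESR; categorical items such as sex would need
# encoding first) plus a binary outcome column "progressed_to_ra".
df = pd.read_csv("ua_cohort.csv")  # placeholder file name
X = df.drop(columns=["progressed_to_ra"])
y = df["progressed_to_ra"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# A small fully connected network; a real study would tune the architecture,
# handle class imbalance, and validate on an external cohort (as Dr. Fujii's
# group did with the ANSWER cohort).
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)

print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```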
“We have developed a machine learning model that can predict the progression of RA from UA using clinical parameters,” Dr. Fujii concluded. “This model has the potential to assist rheumatologists in providing appropriate care and timely intervention for patients with UA.”
“Dr. Fujii presented a fascinating study,” Dr. Jans said. “They achieved an accuracy of 80% when applying a DNN model to predict progression from UA to RA. This level of accuracy is relatively high and certainly promising. However, it is important to consider that a pre-test probability of 30% [for progressing from UA to RA] is also relatively high, which partially explains the high accuracy. Nonetheless, this study represents a significant step forward in the clinical management of patients with UA, as it helps identify those who may benefit the most from regular clinical follow-up.”
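To put the accuracy figure in context with a quick, illustrative calculation that is not part of the study or Dr. Jans's remarks: if roughly 30% of patients with UA progress to RA, a trivial rule that always predicts "no progression" would already be correct about 70% of the time (1 − 0.30 = 0.70), so the model's 80% accuracy amounts to roughly a 10 percentage-point gain over that naive baseline.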
Dr. Li and Dr. Stoel report no relevant financial relationships with industry. Dr. Fujii has received speaking fees from Asahi Kasei, AbbVie, Chugai, and Tanabe Mitsubishi Pharma. Dr. Jans has received speaking fees from AbbVie, UCB, Lilly, and Novartis; he is cofounder of RheumaFinder. The Leiden study was funded by the Dutch Research Council and the China Scholarship Council. The study by Dr. Fujii and colleagues had no outside funding.
A version of this article first appeared on Medscape.com.
AT EULAR 2023
Sewer data says Ohio person has had COVID for 2 years
The strain of the virus appears to be unique, the researchers said.
The mutated version of the virus was discovered by a team of researchers, led by University of Missouri–Columbia virologist Marc Johnson, PhD, that has been studying standalone mutations identified in wastewater. On Twitter, Dr. Johnson said their work could help warn people of a potential risk.
“If you knew of an exposure of a group of people to a deadly disease, there would be an obligation to inform them,” he wrote.
He believes the infected person lives in Columbus, works at a courthouse in nearby Fayette County, and has gut health problems. That county has a population of just 15,000 but had record-high COVID wastewater levels in May, The Columbus Dispatch reported, and the strain Dr. Johnson is researching was the only COVID strain found in its wastewater.
“This person was shedding thousands of times more material than a normal person ever would,” Dr. Johnson told the Dispatch. “I think this person isn’t well. ... I’m guessing they have GI issues.”
Wastewater monitoring for COVID-19 is used only to inform public health officials about community levels and spread of the virus; people with COVID are not tracked down using this information.
The Centers for Disease Control and Prevention told the Dispatch that the findings do not mean there’s a public health threat.
“Unusual or ‘cryptic’ sequences identified in wastewater may represent viruses that can replicate in particular individuals, but not in the general population,” the CDC wrote in a statement to the newspaper. “This can be because of a compromised immune system. CDC and other institutions conduct studies in immunocompromised individuals to understand persistent infection and virus evolution.”
Ohio health officials told the newspaper that they don’t consider the situation a public health threat because the cryptic strain hasn’t spread beyond two sewer sheds for those 2 years.
Dr. Johnson and colleagues have been researching other unique COVID strains found in wastewater. They wrote a paper about one such case in Wisconsin, which is currently available as a preprint.
In the paper, the researchers suggest some people are persistently infected, calling them “prolonged shedders.” The researchers wrote that prolonged shedders could be human or “nonhuman,” and that “increased global monitoring of such lineages in wastewater could help anticipate future circulating mutations and/or variants of concern.”
Earlier in 2023, the CDC announced it was ending its community-level reporting of COVID test data and would rely more heavily on hospitalization reports and wastewater monitoring. COVID hospitalizations dipped to 7,212 nationally for the week of June 1-8, a 6% decline from the week prior, according to the CDC. That equates to roughly two hospitalizations per 100,000 people.
A version of this article first appeared on WebMD.com.