Remembering the Dead in Unity and Peace
Soldiers’ graves are the greatest preachers of peace.
Albert Schweitzer1
From the window of my room in the house where I grew up, I could see the American flag flying over Fort Sam Houston National Cemetery. I would ride my bicycle around the paths that divided the grassy sections of graves to the blocks where my father and grandfather were buried. I would stand before the gravesites in a state combining prayer, processing, and remembrance. Carved into my grandfather’s headstone were the 2 world wars in which he fought; on my father’s, the 3 conflicts in which he served. I would walk up to their headstones and trace the emblems of belief: the engraved Star of David that marked my grandfather’s grave and the simple cross for my father.
My visits, and my writing about them, may strike some readers as morbid. For me, however, the experience and memories are calming and peaceful, like the cemetery itself. There was something incredibly comforting about the uniformity of the headstones stretching for miles, mirroring the ranks of soldiers in the wars they commemorated. Yet, as with the men and women who fought each conflict, every grave told a succinct, Hemingway-like story of a military career etched in stone. I know now that discrimination in the military segregated even the burial of service members.2 Still, it appeared to my younger self that, at least compared with civilian cemeteries and their massive monuments to the wealthy and powerful, there was an egalitarian effect: my master sergeant grandfather’s plot was indistinguishable from that of my colonel father.
Memorial Day and military cemeteries have a shared history. While Veterans Day honors all who have worn the uniform, living and dead, Memorial Day, as its name suggests, remembers those who have died in a broadly conceived line of duty. The holiday’s original name, Decoration Day, was changed to emphasize the reverence of remembrance and the more solemn character of the observance.3 The first widespread observance of Memorial Day commemorated those who perished in the Civil War, which remains the conflict with the highest number of casualties in American history. The first national commemoration occurred at Arlington National Cemetery, where 5000 volunteers decorated 20,000 Union and Confederate graves in an act of solidarity and reconciliation. The practice struck a chord in a country beleaguered by war and division.2
National cemeteries also emerged from the grief and gratitude that marked the Civil War. President Abraham Lincoln, who gave us the famous US Department of Veterans Affairs (VA) mission motto, also inaugurated national cemeteries. At the beginning of the Civil War, only Union soldiers who sacrificed their lives to end slavery were entitled to burial in them. Reflecting the rift that divided the country, it was later contended that such divisiveness should not continue unto death, and Confederate soldiers were granted the right to be buried beside those they had fought against, united in death and memory.4
Today, the country is as divided as it has been in living memory: more than a few observers of American culture, including the makers of the popular new film Civil War, believe we are on the brink of another civil war.5 While we take their warning seriously, there are still signs of unity among the people, like those that followed the war between the states. Recently, in that same national cemetery where I first contemplated these themes, justice, delayed too long, was not entirely denied. A ceremony was held to dedicate 17 headstones honoring the memories of Black World War I Army soldiers who were court-martialed and hanged in the wake of the Houston riots of 1917. As a sign of their dishonor, their headstones listed only their dates and names, nothing of their military service. At the urging of their descendants, the US Army reopened the files and found the verdicts to have been racially motivated. The Army set aside the convictions, granted the soldiers honorable discharges for their service in life, and replaced their headstones with ones that enshrined that respect in death.6
Some reading this column may, like me, have had the profound privilege of participating in a burial at a national cemetery. We recall the stirring mix of pride and loss when the honor guard hands the perfectly folded flag to the bereaved family member and bids farewell to their comrade with a salute. Yet not all families have this privilege. One of the saddest experiences I recall came when I was in a leadership position at a VA facility, unable to help impoverished families who were denied VA burial benefits or payments to transport their deceased veteran closer to home. That sorrow often turned to thankful relief when a veterans service organization or other community group offered to pay the funerary expenses. Fortunately, like eligibility for VA health care, the criteria for burial benefits have steadily expanded to encompass spouses, adult children, and others who served.7
In a similar display of altruism this Memorial Day, veterans service organizations, Boy Scouts, and other volunteers will place a flag on every grave to show that some memories are stronger than death. If you have never seen it, I encourage you to visit a VA or other national cemetery this holiday or, even better, volunteer to place flags. Either way, spend a few moments thankfully remembering that we can all engage in those uniquely American Memorial Day pastimes of barbecues and baseball games because so many served and died to protect our way of life. The epigraph at the beginning of this column is attributed to Albert Schweitzer, the physician-theologian of reverence for life. The news today is full of war and rumors of war.8 Let us all hope Schweitzer’s message is heard around the world so there is no need to build more national cemeteries to remember our veterans.
1. Cohen R. On Omaha Beach today, where’s the comradeship? The New York Times. June 5, 2004. Accessed April 26, 2024. https://www.nytimes.com/2004/06/05/world/on-omaha-beach-today-where-s-the-comradeship.html
2. Stillwell B. How ‘Decoration Day’ became Memorial Day. Military.com. Published May 12, 2020. Accessed April 26, 2024. https://www.military.com/holidays/memorial-day/how-decoration-day-became-memorial-day.html
3. The history of Memorial Day. PBS. Accessed April 26, 2024. https://www.pbs.org/national-memorial-day-concert/memorial-day/history/
4. US Department of Veterans Affairs, National Cemetery Administration. Facts: NCA history and development. Updated October 18, 2023. Accessed April 26, 2024. https://www.cem.va.gov/facts/NCA_History_and_Development_1.asp
5. Lerer L. How the movie ‘Civil War’ echoes real political anxieties. The New York Times. April 21, 2024. Accessed April 26, 2024. https://www.nytimes.com/2024/04/21/us/politics/civil-war-movie-politics.html
6. VA’s National Cemetery Administration dedicates new headstones to honor Black soldiers, correcting 1917 injustice. News release. US Department of Veterans Affairs. Published February 22, 2024. Accessed April 26, 2024. https://news.va.gov/press-room/va-headstones-black-soldiers-1917-injustice/
7. US Department of Veterans Affairs, National Cemetery Administration. Burial benefits. Updated September 27, 2023. Accessed April 26, 2024. https://www.cem.va.gov/burial_benefits/
8. Racker M. Why so many politicians are talking about World War III. Time. November 20, 2023. Accessed April 29, 2024. https://time.com/6336897/israel-war-gaza-world-war-iii/
Artificial Intelligence in GI and Hepatology
Dear colleagues,
Since our prior Perspectives piece on artificial intelligence (AI) in GI and Hepatology in 2022, the field has seen almost exponential growth. Expectations are high that AI will revolutionize our field and significantly improve patient care. But as the global discussion on AI has shown, there are real challenges with adoption, including issues with accuracy, reliability, and privacy.
In this issue, Dr. Nabil M. Mansour and Dr. Thomas R. McCarty explore the current and future impact of AI on gastroenterology, while Dr. Basile Njei and Mr. Yazan A. Al-Ajlouni assess its role in hepatology. We hope these pieces will help inform your discussions about incorporating or researching AI in your own practices. We welcome your thoughts on this issue on X @AGA_GIHN.
Gyanprakash A. Ketwaroo, MD, MSc, is associate professor of medicine, Yale University, New Haven, Conn., and chief of endoscopy at West Haven (Conn.) VA Medical Center. He is an associate editor for GI & Hepatology News.
Artificial Intelligence in Gastrointestinal Endoscopy
BY THOMAS R. MCCARTY, MD, MPH; NABIL M. MANSOUR, MD
The last few decades have seen an exponential increase in interest in the role of artificial intelligence (AI) and in the adoption of deep learning algorithms within healthcare and patient care services. The field of gastroenterology and endoscopy has similarly seen tremendous uptake in the acceptance and implementation of AI for a variety of gastrointestinal conditions. The spectrum of AI-based applications includes detection and diagnostic tools as well as therapeutic assistance tools. From the first US Food and Drug Administration (FDA)-approved device that uses machine learning to assist clinicians in detecting lesions during colonoscopy, to more innovative machine learning techniques for small bowel, esophageal, and hepatobiliary conditions, AI has dramatically changed the landscape of gastrointestinal endoscopy.
Approved applications for colorectal cancer
In an attempt to improve colorectal cancer screening and outcomes related to screening and surveillance, efforts have been focused on procedural performance metrics, quality indicators, and tools to aid in lesion detection and improve quality of care. One such tool has been computer-aided detection (CADe), with early randomized controlled trial (RCT) data showing significantly increased adenoma detection rate (ADR) and adenomas per colonoscopy (APC).1-3
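To make the two quality metrics concrete, here is a minimal Python sketch of how ADR and APC might be computed from per-procedure records. The record structure and example counts are hypothetical, invented purely for illustration.

```python
# Minimal sketch: computing ADR and APC from hypothetical procedure records.
from dataclasses import dataclass

@dataclass
class Colonoscopy:
    adenomas_found: int  # histologically confirmed adenomas in this procedure

def adenoma_detection_rate(procedures):
    """ADR: fraction of screening colonoscopies finding >= 1 adenoma."""
    with_adenoma = sum(1 for p in procedures if p.adenomas_found >= 1)
    return with_adenoma / len(procedures)

def adenomas_per_colonoscopy(procedures):
    """APC: mean number of adenomas detected per procedure."""
    return sum(p.adenomas_found for p in procedures) / len(procedures)

# Example: 10 screening exams, 4 of which found at least one adenoma.
exams = [Colonoscopy(n) for n in [0, 2, 0, 1, 0, 0, 3, 0, 1, 0]]
print(f"ADR = {adenoma_detection_rate(exams):.0%}")    # ADR = 40%
print(f"APC = {adenomas_per_colonoscopy(exams):.2f}")  # APC = 0.70
```

The distinction between the two matters when interpreting trial results: a detector that surfaces extra adenomas in patients who already had one raises APC without moving ADR.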
Ultimately, these data led to FDA approval of the CADe system GI Genius (Medtronic, Dublin, Ireland) in 2021.4 Additional systems have since been FDA approved or 510(k) cleared, including Endoscreener (Wision AI, Shanghai, China), SKOUT (Iterative Health, Cambridge, Massachusetts), MAGENTIQ-COLO (MAGENTIQ-EYE LTD, Haifa, Israel), and CAD EYE (Fujifilm, Tokyo, Japan), all of which have shown increased ADR, increased APC, and/or reduced adenoma miss rates in randomized trials.5
Yet despite the promise of improved quality and subsequent translation to better patient outcomes, there has been a noticeable disconnect between RCT data and the real-world literature. In a recent study, no improvement in ADR was seen after implementation of a CADe system for colorectal cancer screening, among both higher and lower ADR performers; looking at change over time after implementation, CADe had no positive effect in any group, divergent from the early RCT data.6 In a more recent multicenter, community-based RCT, CADe again did not result in a statistically significant difference in the number of adenomas detected.7 The differences between some of these more recent real-world studies and the majority of the RCT data raise important questions regarding the potential for bias (due to unblinding) in prospective trials, as well as the role of human-AI interaction.
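To see why "no statistically significant difference" is the operative phrase, a before-and-after ADR comparison of this kind often reduces to a two-proportion z-test. The sketch below uses invented cohort counts, not data from any of the cited studies.

```python
# Illustrative two-proportion z-test: did ADR change after CADe rollout?
# All counts are fabricated for demonstration only.
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                        # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))          # two-sided p value
    return p1, p2, z, p_value

# Hypothetical screening cohorts before vs after CADe implementation.
pre, post, z, p = two_proportion_ztest(420, 1000, 445, 1000)
print(f"ADR pre = {pre:.1%}, post = {post:.1%}, z = {z:.2f}, p = {p:.3f}")
```

With these made-up numbers, a 2.5-point ADR difference is not significant (p > 0.05), which is the shape of result the community-based trials above reported.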
Importantly, both cohorts in these studies met adequate ADR benchmarks, though it remains unclear whether a higher ADR necessarily translates into better patient outcomes: is higher always better? In addition, an important consideration in evaluating any AI/CADe system is that these systems often undergo frequent updates, each promising improved accuracy, sensitivity, and specificity. This is an interesting dilemma and raises questions about the enduring relevance of studies conducted using an outdated version of a CADe system.
Unanswered questions also remain regarding the ideal ADR for implementation, preferred patient populations for screening (especially younger individuals), and the role and adoption of computer-aided polyp diagnosis/characterization (CADx) within the United States. Furthermore, questions regarding procedural withdrawal time, impact on sessile serrated lesion detection, cost-effectiveness, and preferred adoption strategies have begun to be explored, though more data are required to define a best practice approach. Ultimately, answers to some of these unknowns may explain the discordant results and help guide future implementation measures.
Innovative applications for alternative gastrointestinal conditions
Given the fervor and excitement surrounding AI-based colorectal screening, as well as the outcomes associated with it, it is not surprising that these techniques have been extended to other gastrointestinal conditions. At this time, all of these are fledgling, mostly single-center tools, not yet ready for widespread adoption. Nonetheless, they represent a potentially important step forward for difficult-to-manage gastrointestinal diseases.
Machine learning CADe systems have been developed to help identify early Barrett’s neoplasia, assess the depth of invasion of gastric cancer, and detect lesions in small bowel video capsule endoscopy.8-10 Endoscopic retrograde cholangiopancreatography (ERCP)-based applications for cholangiocarcinoma and indeterminate stricture diagnosis have also been studied.11 Additional AI-based algorithms have been employed in complex procedures such as endoscopic submucosal dissection (ESD) and peroral endoscopic myotomy (POEM) to delineate vessels, better define tissue planes for dissection, and visualize landmark structures.12,13 Furthermore, AI-based scope guidance/manipulation, bleeding detection, landmark identification, and lesion detection have the potential to revolutionize endoscopic training and education. The impact that generative AI could have on clinical practice is also an exciting prospect that warrants further investigation.
Artificial intelligence adoption in clinical practice
Clinical practice with regard to AI and colorectal cancer screening largely mirrors the disconnect in the current literature, with “believers” and “nonbelievers,” and with innovators and early adopters alongside laggards. In our own academic practices, we continue to struggle with the adoption and standardized implementation of AI-based colorectal cancer CADe systems, despite the positive RCT data. It is likely that AI uptake will follow the technology prediction known as Amara’s Law: individuals tend to overestimate the short-term impact of new technologies while underestimating their long-term effects. In the end, more widespread adoption in community practice and larger-scale real-world clinical outcomes studies are likely to determine the true impact of these exciting technologies. For other, less established AI-based tools, more data are currently required.
Conclusions
Ultimately, AI-based algorithms are likely here to stay, with continued improvement and evolution driven by clinician feedback and patient care needs. Current tools, while not all-encompassing, have the potential to dramatically change the landscape of endoscopic training, diagnostic evaluation, and therapeutic care. It is critically important that relevant stakeholders, both endoscopists and patients, be involved in future applications and design to improve efficiency and quality outcomes overall.
Dr. McCarty is based in the Lynda K. and David M. Underwood Center for Digestive Disorders, Houston Methodist Hospital. Dr. Mansour is based in the section of gastroenterology, Baylor College of Medicine, Houston. Dr. McCarty reports no conflicts of interest. Dr. Mansour reports having been a consultant for Iterative Health.
References
1. Repici A, et al. Efficacy of real-time computer-aided detection of colorectal neoplasia in a randomized trial. Gastroenterology. 2020 Aug. doi: 10.1053/j.gastro.2020.04.062.
2. Repici A, et al. Artificial intelligence and colonoscopy experience: Lessons from two randomised trials. Gut. 2022 Apr. doi: 10.1136/gutjnl-2021-324471.
3. Wallace MB, et al. Impact of artificial intelligence on miss rate of colorectal neoplasia. Gastroenterology. 2022 Jul. doi: 10.1053/j.gastro.2022.03.007.
4. US Food and Drug Administration (FDA). GI Genius FDA approval [April 9, 2021]. Accessed January 5, 2022. www.accessdata.fda.gov/cdrh_docs/pdf21/K211951.pdf
5. Maas MHJ, et al. A computer-aided polyp detection system in screening and surveillance colonoscopy: An international, multicentre, randomised, tandem trial. Lancet Digit Health. 2024 Mar. doi: 10.1016/S2589-7500(23)00242-X.
6. Ladabaum U, et al. Computer-aided detection of polyps does not improve colonoscopist performance in a pragmatic implementation trial. Gastroenterology. 2023 Mar. doi: 10.1053/j.gastro.2022.12.004.
7. Wei MT, et al. Evaluation of computer-aided detection during colonoscopy in the community (AI-SEE): A multicenter randomized clinical trial. Am J Gastroenterol. 2023 Oct. doi: 10.14309/ajg.0000000000002239.
8. de Groof J, et al. The Argos project: The development of a computer-aided detection system to improve detection of Barrett’s neoplasia on white light endoscopy. United European Gastroenterol J. 2019 May. doi: 10.1177/2050640619837443.
9. Kanesaka T, et al. Computer-aided diagnosis for identifying and delineating early gastric cancers in magnifying narrow-band imaging. Gastrointest Endosc. 2018 May. doi: 10.1016/j.gie.2017.11.029.
10. Sahafi A, et al. Edge artificial intelligence wireless video capsule endoscopy. Sci Rep. 2022 Aug. doi: 10.1038/s41598-022-17502-7.
11. Njei B, et al. Artificial intelligence in endoscopic imaging for detection of malignant biliary strictures and cholangiocarcinoma: A systematic review. Ann Gastroenterol. 2023 Mar-Apr. doi: 10.20524/aog.2023.0779.
12. Ebigbo A, et al. Vessel and tissue recognition during third-space endoscopy using a deep learning algorithm. Gut. 2022 Dec. doi: 10.1136/gutjnl-2021-326470.
13. Cao J, et al. Intelligent surgical workflow recognition for endoscopic submucosal dissection with real-time animal study. Nat Commun. 2023 Oct. doi: 10.1038/s41467-023-42451-8.
The Promise and Challenges of AI in Hepatology
BY BASILE NJEI, MD, MPH, PHD; YAZAN A. AL-AJLOUNI, MPHIL
In the dynamic realm of medicine, artificial intelligence (AI) emerges as a transformative force, notably within hepatology. The discipline of hepatology, dedicated to liver and related organ diseases, is ripe for AI’s promise to revolutionize diagnostics and treatment, pushing toward a future of precision medicine. Yet the path to fully realizing AI’s potential in hepatology is beset by data, ethical, and integration challenges.
The application of AI, particularly in histopathology, significantly enhances disease diagnosis and staging in hepatology. AI-driven approaches remedy traditional histopathological challenges, such as interpretative variability, providing more consistent and accurate disease analyses. This is especially evident in conditions like metabolic dysfunction-associated steatohepatitis (MASH) and hepatocellular carcinoma (HCC), where AI aids in identifying critical gene signatures, thereby refining therapy selection.
Similarly, deep learning (DL), a branch of AI, has attracted significant interest globally, particularly for image recognition. AI’s incorporation into medical imaging marks a significant advancement, enabling early detection of malignancies like HCC and improving diagnostics in steatotic liver disease through enhanced imaging analyses using convolutional neural networks (CNNs). The abundance of imaging data alongside clinical outcomes has catalyzed AI’s integration into radiology, leading to the swift growth of radiomics as a novel domain in medical research.
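As a concrete, deliberately tiny illustration of the CNN approach described above, the following PyTorch sketch defines an untrained binary image classifier. The "steatosis vs normal" framing, layer sizes, and input shape are assumptions chosen for illustration, not a published architecture.

```python
# Toy CNN sketch for a hypothetical binary liver-imaging task (untrained).
import torch
import torch.nn as nn

class TinyLiverCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1-channel input
            nn.ReLU(),
            nn.MaxPool2d(2),                             # halve resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # 32 features/image
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = TinyLiverCNN()
batch = torch.randn(4, 1, 224, 224)  # stand-in for 4 grayscale frames
print(model(batch).shape)            # torch.Size([4, 2]) per-class logits
```

Real systems differ mainly in scale (deeper pretrained backbones, far more data) rather than in kind; the convolve-pool-classify pattern is the same.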
AI has also been shown to identify nuanced alterations in electrocardiograms (EKGs) associated with liver conditions, potentially detecting the progression of liver diseases at an earlier stage than currently possible. By leveraging complex algorithms and machine learning, AI can analyze EKG patterns with a precision and depth unattainable through traditional manual interpretation. Given that liver diseases, such as cirrhosis or hepatitis, can induce subtle cardiac changes long before other clinical symptoms manifest, early detection through AI-enhanced EKG analysis could lead to timely interventions, potentially halting or reversing disease progression. This approach further enriches our understanding of the intricate interplay between liver function and cardiac health, highlighting the potential for AI to transform not just liver disease diagnostics but also to foster a more integrated approach to patient care.
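To ground the EKG idea with a toy example, the sketch below fits a classifier over a few hand-picked EKG-derived features. The features, data, and labels are synthetic; published models such as the AI-Cirrhosis-ECG score listed in the Sources instead apply deep learning to raw waveforms.

```python
# Synthetic sketch: flagging a cirrhosis-associated EKG pattern from
# summary features. All numbers here are fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Columns: QT interval (ms), heart rate (bpm), QRS amplitude (mV)
X = rng.normal(loc=[420, 75, 1.2], scale=[30, 12, 0.3], size=(200, 3))
# Fake labels loosely tied to QT prolongation, a change reported in cirrhosis.
y = (X[:, 0] + rng.normal(0, 20, 200) > 440).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
new_patient = np.array([[465.0, 82.0, 1.1]])
prob = clf.predict_proba(new_patient)[0, 1]
print(f"P(cirrhosis-associated pattern) ~ {prob:.2f}")
```

The clinical point survives the simplification: the signal lives in subtle, quantifiable waveform changes that a model can weigh long before they would catch a reader’s eye.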
Beyond diagnostics, the burgeoning field of generative AI introduces groundbreaking possibilities in treatment planning and patient education, particularly for chronic conditions like cirrhosis. Generative AI produces original content, including text, visuals, and music, by identifying and learning patterns from its training data. When it leverages large language models (LLMs), it entails training on vast collections of textual data using models characterized by very large numbers of parameters. A notable instance of generative AI employing LLMs is ChatGPT, built on Generative Pretrained Transformer (GPT) models. By simulating disease progression and treatment outcomes, generative AI could foster personalized treatment strategies and empower patients with knowledge about their health trajectories. Yet realizing this potential requires overcoming data quality and interpretability challenges and ensuring AI outputs are accessible and actionable for clinicians and patients.
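As a hedged sketch of the patient-education use case, here is one way such a query might look using the OpenAI Python client; the model name, prompts, and framing are assumptions, and any clinical deployment would require validation, guardrails, and clinician review well beyond this.

```python
# Sketch only: plain-language patient education via a general-purpose LLM.
# Requires the openai package (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whatever is available
    messages=[
        {"role": "system",
         "content": "You explain hepatology care plans in plain language at "
                    "an 8th-grade reading level. You do not give medical "
                    "advice; you refer questions back to the care team."},
        {"role": "user",
         "content": "What does compensated cirrhosis mean, and why did my "
                    "doctor order an ultrasound every six months?"},
    ],
)
print(response.choices[0].message.content)
```

Note that this covers only the education half of the paragraph; simulating disease progression for treatment planning is a separate modeling problem, not something a chat completion provides.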
Despite these advancements, leveraging AI in hepatology is not devoid of hurdles. The development and training of AI models require extensive and diverse datasets, raising concerns about data privacy and ethical use; addressing those concerns, through transparent algorithmic processes and stringent ethical standards, is paramount for successfully integrating AI into clinical hepatology practice. Algorithmic bias, patient privacy, and the impact of AI-driven decisions all underscore the need for cautious deployment. Developing transparent, understandable algorithms and establishing ethical guidelines for AI use are critical steps toward ethically leveraging AI in patient care.
In conclusion, AI’s integration into hepatology holds tremendous promise for advancing patient care through enhanced diagnostics, treatment planning, and patient education. Overcoming the associated challenges, including ethical concerns, data diversity, and algorithm interpretability, is crucial. As the hepatology community navigates this technological evolution, a balanced approach that marries technological advancements with ethical stewardship will be key to harnessing AI’s full potential, ensuring it serves the best interests of patients and propels the field of hepatology into the future.
We predict a trajectory of increased use and adoption of AI in hepatology; AI here is likely to meet the tests of pervasiveness, improvement, and innovation. The adoption of AI in routine hepatology diagnosis and management will likely follow Amara’s law and the five stages of the hype cycle. We believe we are still in the infancy of adopting AI technology in hepatology, a phase that may last 5 years before a peak of inflated expectations; the trough of disillusionment and slope of enlightenment may only be observed over the coming decades.
Dr. Njei is based in the Section of Digestive Diseases, Yale School of Medicine, New Haven, Conn. Mr. Al-Ajlouni is a senior medical student at New York Medical College School of Medicine, Valhalla, N.Y. They have no conflicts of interest to declare.
Sources
Taylor-Weiner A, et al. A Machine Learning Approach Enables Quantitative Measurement of Liver Histology and Disease Monitoring in NASH. Hepatology. 2021 Jul. doi: 10.1002/hep.31750.
Zeng Q, et al. Artificial intelligence predicts immune and inflammatory gene signatures directly from hepatocellular carcinoma histology. J Hepatol. 2022 Jul. doi: 10.1016/j.jhep.2022.01.018.
Ahn JC, et al. Development of the AI-Cirrhosis-ECG Score: An Electrocardiogram-Based Deep Learning Model in Cirrhosis. Am J Gastroenterol. 2022 Mar. doi: 10.14309/ajg.0000000000001617.
Nduma BN, et al. The Application of Artificial Intelligence (AI)-Based Ultrasound for the Diagnosis of Fatty Liver Disease: A Systematic Review. Cureus. 2023 Dec 15. doi: 10.7759/cureus.50601.
Dear colleagues,
Since our prior Perspectives piece on artificial intelligence (AI) in GI and Hepatology in 2022, the field has seen almost exponential growth. Expectations are high that AI will revolutionize our field and significantly improve patient care. But as the global discussion on AI has shown, there are real challenges with adoption, including issues with accuracy, reliability, and privacy.
In this issue, Dr. Nabil M. Mansour and Dr. Thomas R. McCarty explore the current and future impact of AI on gastroenterology, while Dr. Basile Njei and Yazan A. Al Ajlouni assess its role in hepatology. We hope these pieces will help your discussions in incorporating or researching AI for use in your own practices. We welcome your thoughts on this issue on X @AGA_GIHN.
Gyanprakash A. Ketwaroo, MD, MSc, is associate professor of medicine, Yale University, New Haven, Conn., and chief of endoscopy at West Haven (Conn.) VA Medical Center. He is an associate editor for GI & Hepatology News.
Artificial Intelligence in Gastrointestinal Endoscopy
BY THOMAS R. MCCARTY, MD, MPH; NABIL M. MANSOUR, MD
The last few decades have seen an exponential increase and interest in the role of artificial intelligence (AI) and adoption of deep learning algorithms within healthcare and patient care services. The field of gastroenterology and endoscopy has similarly seen a tremendous uptake in acceptance and implementation of AI for a variety of gastrointestinal conditions. The spectrum of AI-based applications includes detection or diagnostic-based as well as therapeutic assistance tools. From the first US Food and Drug Administration (FDA)-approved device that uses machine learning to assist clinicians in detecting lesions during colonoscopy, to other more innovative machine learning techniques for small bowel, esophageal, and hepatobiliary conditions, AI has dramatically changed the landscape of gastrointestinal endoscopy.
Approved applications for colorectal cancer
In an attempt to improve colorectal cancer screening and outcomes related to screening and surveillance, efforts have been focused on procedural performance metrics, quality indicators, and tools to aid in lesion detection and improve quality of care. One such tool has been computer-aided detection (CADe), with early randomized controlled trial (RCT) data showing significantly increased adenoma detection rate (ADR) and adenomas per colonoscopy (APC).1-3
Ultimately, this data led to FDA approval of the CADe system GI Genius (Medtronic, Dublin, Ireland) in 2021.4 Additional systems have since been FDA approved or 510(k) cleared including Endoscreener (Wision AI, Shanghai, China), SKOUT (Iterative Health, Cambridge, Massachusetts), MAGENTIQ-COLO (MAGENTIQ-EYE LTD, Haifa, Israel), and CAD EYE (Fujifilm, Tokyo), all of which have shown increased ADR and/or increased APC and/or reduced adenoma miss rates in randomized trials.5
Yet despite the promise of improved quality and subsequent translation to better patient outcomes, there has been a noticeable disconnect between RCT data and more real-world literature.6 In a recent study, no improvement was seen in ADR after implementation of a CADe system for colorectal cancer screening — including both higher and lower-ADR performers. Looking at change over time after implementation, CADe had no positive effect in any group over time, divergent from early RCT data. In a more recent multicenter, community-based RCT study, again CADe did not result in a statistically significant difference in the number of adenomas detected.7 The differences between some of these more recent “real-world” studies vs the majority of data from RCTs raise important questions regarding the potential of bias (due to unblinding) in prospective trials, as well as the role of the human-AI interaction.
Importantly for RCT data, both cohorts in these studies met adequate ADR benchmarks, though it remains unclear whether a truly increased ADR necessitates better patient outcomes — is higher always better? In addition, an important consideration with evaluating any AI/CADe system is that they often undergo frequent updates, each promising improved accuracy, sensitivity, and specificity. This is an interesting dilemma and raises questions about the enduring relevance of studies conducted using an outdated version of a CADe system.
Additional unanswered questions regarding an ideal ADR for implementation, preferred patient populations for screening (especially for younger individuals), and the role and adoption of computer-aided polyp diagnosis/characterization (CADx) within the United States remain. Furthermore, questions regarding procedural withdrawal time, impact on sessile serrated lesion detection, cost-effectiveness, and preferred adoption strategies have begun to be explored, though require more data to better define a best practice approach. Ultimately, answers to some of these unknowns may explain the discordant results and help guide future implementation measures.
Innovative applications for alternative gastrointestinal conditions
Given the fervor and excitement, as well as the outcomes associated with AI-based colorectal screening, it is not surprising these techniques have been expanded to other gastrointestinal conditions. At this time, all of these are fledgling, mostly single-center tools, not yet ready for widespread adoption. Nonetheless, these represent a potentially important step forward for difficult-to-manage gastrointestinal diseases.
Machine learning CADe systems have been developed to help identify early Barrett’s neoplasia, depth and invasion of gastric cancer, as well as lesion detection in small bowel video capsule endoscopy.8-10 Endoscopic retrograde cholangiopancreatography (ERCP)-based applications for cholangiocarcinoma and indeterminate stricture diagnosis have also been studied.11 Additional AI-based algorithms have been employed for complex procedures such as endoscopic submucosal dissection (ESD) or peroral endoscopic myotomy (POEM) to delineate vessels, better define tissue planes for dissection, and visualize landmark structures.12,13 Furthermore, AI-based scope guidance/manipulation, bleeding detection, landmark identification, and lesion detection have the potential to revolutionize endoscopic training and education. The impact that generative AI can potentially have on clinical practice is also an exciting prospect that warrants further investigation.
Artificial intelligence adoption in clinical practice
Clinical practice with regard to AI and colorectal cancer screening largely mirrors the disconnect in the current literature, with “believers” and “non-believers” as well as innovators and early adopters alongside laggards. In our own academic practices, we continue to struggle with the adoption and standardized implementation of AI-based colorectal cancer CADe systems, despite the RCT data showing positive results. It is likely that AI uptake will follow the technology predictions of Amara’s Law — i.e., individuals tend to overestimate the short-term impact of new technologies while underestimating long-term effects. In the end, more widespread adoption in community practice and larger scale real-world clinical outcomes studies are likely to determine the true impact of these exciting technologies. For other, less established AI-based tools, more data are currently required.
Conclusions
Ultimately, AI-based algorithms are likely here to stay, with continued improvement and evolution to occur based on provider feedback and patient care needs. Current tools, while not all-encompassing, have the potential to dramatically change the landscape of endoscopic training, diagnostic evaluation, and therapeutic care. It is critically important that relevant stakeholders, both endoscopists and patients, be involved in future applications and design to improve efficiency and quality outcomes overall.
Dr. McCarty is based in the Lynda K. and David M. Underwood Center for Digestive Disorders, Houston Methodist Hospital. Dr. Mansour is based in the section of gastroenterology, Baylor College of Medicine, Houston. Dr. McCarty reports no conflicts of interest. Dr. Mansour reports having been a consultant for Iterative Health.
References
1. Repici A, et al. Efficacy of real-time computer-aided detection of colorectal neoplasia in a randomized trial. Gastroenterology. 2020 Aug. doi: 10.1053/j.gastro.2020.04.062.
2. Repici A, et al. Artificial intelligence and colonoscopy experience: Lessons from two randomised trials. Gut. Apr 2022. doi: 10.1136/gutjnl-2021-324471.
3. Wallace MB, et al. Impact of artificial intelligence on miss rate of colorectal neoplasia. Gastroenterology 2022 Jul. doi: 10.1053/j.gastro.2022.03.007.
4. United States Food and Drug Administration (FDA). GI Genius FDA Approval [April 9, 2021]. Accessed January 5, 2022. Available at: www.accessdata.fda.gov/cdrh_docs/pdf21/K211951.pdf.
5. Maas MHJ, et al. A computer-aided polyp detection system in screening and surveillance colonoscopy: An international, multicentre, randomised, tandem trial. Lancet Digit Health. 2024 Mar. doi: 10.1016/S2589-7500(23)00242-X.
6. Ladabaum U, et al. Computer-aided detection of polyps does not improve colonoscopist performance in a pragmatic implementation trial. Gastroenterology. 2023 Mar. doi: 10.1053/j.gastro.2022.12.004.
7. Wei MT, et al. Evaluation of computer-aided detection during colonoscopy in the community (AI-SEE): A multicenter randomized clinical trial. Am J Gastroenterol. 2023 Oct. doi: 10.14309/ajg.0000000000002239.
8. de Groof J, et al. The Argos project: The development of a computer-aided detection system to improve detection of Barrett’s neoplasia on white light endoscopy. United European Gastroenterol J. 2019 May. doi: 10.1177/2050640619837443.
9. Kanesaka T, et al. Computer-aided diagnosis for identifying and delineating early gastric cancers in magnifying narrow-band imaging. Gastrointest Endosc. 2018 May. doi: 10.1016/j.gie.2017.11.029.
10. Sahafi A, et al. Edge artificial intelligence wireless video capsule endoscopy. Sci Rep. 2022 Aug. doi: 10.1038/s41598-022-17502-7.
11. Njei B, et al. Artificial intelligence in endoscopic imaging for detection of malignant biliary strictures and cholangiocarcinoma: A systematic review. Ann Gastroenterol. 2023 Mar-Apr. doi: 10.20524/aog.2023.0779.
12. Ebigbo A, et al. Vessel and tissue recognition during third-space endoscopy using a deep learning algorithm. Gut. 2022 Dec. doi: 10.1136/gutjnl-2021-326470.
13. Cao J, et al. Intelligent surgical workflow recognition for endoscopic submucosal dissection with real-time animal study. Nat Commun. 2023 Oct. doi: 10.1038/s41467-023-42451-8.
The Promise and Challenges of AI in Hepatology
BY BASILE NJEI, MD, MPH, PHD; YAZAN A. AL-AJLOUNI, MPHIL
In the dynamic realm of medicine, artificial intelligence (AI) emerges as a transformative force, notably within hepatology. The discipline of hepatology, dedicated to liver and related organ diseases, is ripe for AI’s promise to revolutionize diagnostics and treatment, pushing toward a future of precision medicine. Yet, the path to fully realizing AI’s potential in hepatology is laced with data, ethical, and integration challenges.
The application of AI, particularly in histopathology, significantly enhances disease diagnosis and staging in hepatology. AI-driven approaches remedy traditional histopathological challenges, such as interpretative variability, providing more consistent and accurate disease analyses. This is especially evident in conditions like metabolic dysfunction-associated steatohepatitis (MASH) and hepatocellular carcinoma (HCC), where AI aids in identifying critical gene signatures, thereby refining therapy selection.
Similarly, deep learning (DL), a branch of AI, has attracted significant interest globally, particularly in image recognition. AI’s incorporation into medical imaging marks a significant advancement, enabling early detection of malignancies like HCC and improving diagnostics in steatotic liver disease through enhanced imaging analyses using convolutional neural networks (CNN). The abundance of imaging data alongside clinical outcomes has catalyzed AI’s integration into radiology, leading to the swift growth of radiomics as a novel domain in medical research.
AI has also been shown to identify nuanced alterations in electrocardiograms (EKGs) associated with liver conditions, potentially detecting the progression of liver diseases at an earlier stage than currently possible. By leveraging complex algorithms and machine learning, AI can analyze EKG patterns with a precision and depth unattainable through traditional manual interpretation. Given that liver diseases, such as cirrhosis or hepatitis, can induce subtle cardiac changes long before other clinical symptoms manifest, early detection through AI-enhanced EKG analysis could lead to timely interventions, potentially halting or reversing disease progression. This approach further enriches our understanding of the intricate interplay between liver function and cardiac health, highlighting the potential for AI to transform not just liver disease diagnostics but also to foster a more integrated approach to patient care.
Beyond diagnostics, the burgeoning field of generative AI introduces groundbreaking possibilities in treatment planning and patient education, particularly for chronic conditions like cirrhosis. Generative AI produces original content, including text, visuals, and music, by identifying and learning patterns from its training data. When it leverages large language models (LLMs), it entails training on vast collections of textual data and using AI models characterized by many parameters. A notable instance of generative AI employing LLMs is ChatGPT (General Pretrained Transformers). By simulating disease progression and treatment outcomes, generative AI can foster personalized treatment strategies and empower patients with knowledge about their health trajectories. Yet, realizing these potential demands requires overcoming data quality and interpretability challenges, and ensuring AI outputs are accessible and actionable for clinicians and patients.
Despite these advancements, leveraging AI in hepatology is not devoid of hurdles. The development and training of AI models require extensive and diverse datasets, raising concerns about data privacy and ethical use. Addressing these concerns is paramount for successfully integrating AI into clinical hepatology practice, necessitating transparent algorithmic processes and stringent ethical standards. Ethical considerations are central to AI’s integration into hepatology. Algorithmic biases, patient privacy, and the impact of AI-driven decisions underscore the need for cautious AI deployment. Developing transparent, understandable algorithms and establishing ethical guidelines for AI use are critical steps towards ethically leveraging AI in patient care.
In conclusion, AI’s integration into hepatology holds tremendous promise for advancing patient care through enhanced diagnostics, treatment planning, and patient education. Overcoming the associated challenges, including ethical concerns, data diversity, and algorithm interpretability, is crucial. As the hepatology community navigates this technological evolution, a balanced approach that marries technological advancements with ethical stewardship will be key to harnessing AI’s full potential, ensuring it serves the best interests of patients and propels the field of hepatology into the future.
We predict a trajectory of increased use and adoption of AI in hepatology. AI in hepatology is likely to meet the test of pervasiveness, improvement, and innovation. The adoption of AI in routine hepatology diagnosis and management will likely follow Amara’s law and the five stages of the hype cycle. We believe that we are still in the infant stages of adopting AI technology in hepatology, and this phase may last 5 years before there is a peak of inflated expectations. The trough of disillusionment and slopes of enlightenment may only be observed in the next decades.
Dr. Njei is based in the Section of Digestive Diseases, Yale School of Medicine, New Haven, Conn. Mr. Al-Ajlouni is a senior medical student at New York Medical College School of Medicine, Valhalla, N.Y. They have no conflicts of interest to declare.
Sources
Taylor-Weiner A, et al. A Machine Learning Approach Enables Quantitative Measurement of Liver Histology and Disease Monitoring in NASH. Hepatology. 2021 Jul. doi: 10.1002/hep.31750.
Zeng Q, et al. Artificial intelligence predicts immune and inflammatory gene signatures directly from hepatocellular carcinoma histology. J Hepatol. 2022 Jul. doi: 10.1016/j.jhep.2022.01.018.
Ahn JC, et al. Development of the AI-Cirrhosis-ECG Score: An Electrocardiogram-Based Deep Learning Model in Cirrhosis. Am J Gastroenterol. 2022 Mar. doi: 10.14309/ajg.0000000000001617.
Nduma BN, et al. The Application of Artificial Intelligence (AI)-Based Ultrasound for the Diagnosis of Fatty Liver Disease: A Systematic Review. Cureus. 2023 Dec 15. doi: 10.7759/cureus.50601.
Dear colleagues,
Since our prior Perspectives piece on artificial intelligence (AI) in GI and Hepatology in 2022, the field has seen almost exponential growth. Expectations are high that AI will revolutionize our field and significantly improve patient care. But as the global discussion on AI has shown, there are real challenges with adoption, including issues with accuracy, reliability, and privacy.
In this issue, Dr. Nabil M. Mansour and Dr. Thomas R. McCarty explore the current and future impact of AI on gastroenterology, while Dr. Basile Njei and Yazan A. Al-Ajlouni assess its role in hepatology. We hope these pieces will inform your discussions as you incorporate or research AI for use in your own practices. We welcome your thoughts on this issue on X @AGA_GIHN.
Gyanprakash A. Ketwaroo, MD, MSc, is associate professor of medicine, Yale University, New Haven, Conn., and chief of endoscopy at West Haven (Conn.) VA Medical Center. He is an associate editor for GI & Hepatology News.
Artificial Intelligence in Gastrointestinal Endoscopy
BY THOMAS R. MCCARTY, MD, MPH; NABIL M. MANSOUR, MD
The last few decades have seen an exponential increase in interest in the role of artificial intelligence (AI) and in the adoption of deep learning algorithms within healthcare and patient care services. The field of gastroenterology and endoscopy has similarly seen a tremendous uptake in the acceptance and implementation of AI for a variety of gastrointestinal conditions. The spectrum of AI-based applications includes detection- or diagnosis-based tools as well as therapeutic assistance tools. From the first US Food and Drug Administration (FDA)-approved device that uses machine learning to assist clinicians in detecting lesions during colonoscopy, to other more innovative machine learning techniques for small bowel, esophageal, and hepatobiliary conditions, AI has dramatically changed the landscape of gastrointestinal endoscopy.
Approved applications for colorectal cancer
In an attempt to improve colorectal cancer screening and outcomes related to screening and surveillance, efforts have been focused on procedural performance metrics, quality indicators, and tools to aid in lesion detection and improve quality of care. One such tool has been computer-aided detection (CADe), with early randomized controlled trial (RCT) data showing significantly increased adenoma detection rate (ADR) and adenomas per colonoscopy (APC).1-3
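For readers newer to these quality metrics, the arithmetic behind them is simple: ADR is the proportion of screening colonoscopies that find at least one adenoma, while APC is the mean number of adenomas found per procedure. Here is a minimal sketch in Python; the counts are made up purely for illustration.

```python
# Illustrative only: adenoma counts per screening colonoscopy (hypothetical data).
adenomas_per_procedure = [0, 2, 1, 0, 0, 3, 1, 0, 1, 0]

n = len(adenomas_per_procedure)

# ADR: share of procedures detecting at least one adenoma.
adr = sum(1 for count in adenomas_per_procedure if count >= 1) / n

# APC: mean adenomas found per procedure (captures multiple lesions per exam).
apc = sum(adenomas_per_procedure) / n

print(f"ADR = {adr:.0%}, APC = {apc:.1f}")  # ADR = 50%, APC = 0.8
```

APC is sometimes preferred as a complementary metric because ADR gives no extra credit for finding more than one lesion in the same examination.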
Ultimately, these data led to FDA approval of the CADe system GI Genius (Medtronic, Dublin, Ireland) in 2021.4 Additional systems have since been FDA approved or 510(k) cleared, including Endoscreener (Wision AI, Shanghai, China), SKOUT (Iterative Health, Cambridge, Massachusetts), MAGENTIQ-COLO (MAGENTIQ-EYE LTD, Haifa, Israel), and CAD EYE (Fujifilm, Tokyo), all of which have shown increased ADR and/or increased APC and/or reduced adenoma miss rates in randomized trials.5
Yet despite the promise of improved quality and subsequent translation to better patient outcomes, there has been a noticeable disconnect between RCT data and more real-world literature.6 In a recent study, no improvement was seen in ADR after implementation of a CADe system for colorectal cancer screening — including both higher and lower-ADR performers. Looking at change over time after implementation, CADe had no positive effect in any group over time, divergent from early RCT data. In a more recent multicenter, community-based RCT study, again CADe did not result in a statistically significant difference in the number of adenomas detected.7 The differences between some of these more recent “real-world” studies vs the majority of data from RCTs raise important questions regarding the potential of bias (due to unblinding) in prospective trials, as well as the role of the human-AI interaction.
Importantly for RCT data, both cohorts in these studies met adequate ADR benchmarks, though it remains unclear whether a truly increased ADR translates into better patient outcomes — is higher always better? In addition, an important consideration in evaluating any AI/CADe system is that these systems often undergo frequent updates, each promising improved accuracy, sensitivity, and specificity. This is an interesting dilemma and raises questions about the enduring relevance of studies conducted using an outdated version of a CADe system.
Additional unanswered questions regarding an ideal ADR for implementation, preferred patient populations for screening (especially for younger individuals), and the role and adoption of computer-aided polyp diagnosis/characterization (CADx) within the United States remain. Furthermore, questions regarding procedural withdrawal time, impact on sessile serrated lesion detection, cost-effectiveness, and preferred adoption strategies have begun to be explored, though require more data to better define a best practice approach. Ultimately, answers to some of these unknowns may explain the discordant results and help guide future implementation measures.
Innovative applications for alternative gastrointestinal conditions
Given the fervor and excitement, as well as the outcomes associated with AI-based colorectal screening, it is not surprising these techniques have been expanded to other gastrointestinal conditions. At this time, all of these are fledgling, mostly single-center tools, not yet ready for widespread adoption. Nonetheless, these represent a potentially important step forward for difficult-to-manage gastrointestinal diseases.
Machine learning CADe systems have been developed to help identify early Barrett’s neoplasia, depth and invasion of gastric cancer, as well as lesion detection in small bowel video capsule endoscopy.8-10 Endoscopic retrograde cholangiopancreatography (ERCP)-based applications for cholangiocarcinoma and indeterminate stricture diagnosis have also been studied.11 Additional AI-based algorithms have been employed for complex procedures such as endoscopic submucosal dissection (ESD) or peroral endoscopic myotomy (POEM) to delineate vessels, better define tissue planes for dissection, and visualize landmark structures.12,13 Furthermore, AI-based scope guidance/manipulation, bleeding detection, landmark identification, and lesion detection have the potential to revolutionize endoscopic training and education. The impact that generative AI can potentially have on clinical practice is also an exciting prospect that warrants further investigation.
Artificial intelligence adoption in clinical practice
Clinical practice with regard to AI and colorectal cancer screening largely mirrors the disconnect in the current literature, with “believers” and “non-believers” as well as innovators and early adopters alongside laggards. In our own academic practices, we continue to struggle with the adoption and standardized implementation of AI-based colorectal cancer CADe systems, despite the RCT data showing positive results. It is likely that AI uptake will follow the technology predictions of Amara’s Law — i.e., individuals tend to overestimate the short-term impact of new technologies while underestimating long-term effects. In the end, more widespread adoption in community practice and larger scale real-world clinical outcomes studies are likely to determine the true impact of these exciting technologies. For other, less established AI-based tools, more data are currently required.
Conclusions
Ultimately, AI-based algorithms are likely here to stay, with continued improvement and evolution to occur based on provider feedback and patient care needs. Current tools, while not all-encompassing, have the potential to dramatically change the landscape of endoscopic training, diagnostic evaluation, and therapeutic care. It is critically important that relevant stakeholders, both endoscopists and patients, be involved in future applications and design to improve efficiency and quality outcomes overall.
Dr. McCarty is based in the Lynda K. and David M. Underwood Center for Digestive Disorders, Houston Methodist Hospital. Dr. Mansour is based in the section of gastroenterology, Baylor College of Medicine, Houston. Dr. McCarty reports no conflicts of interest. Dr. Mansour reports having been a consultant for Iterative Health.
References
1. Repici A, et al. Efficacy of real-time computer-aided detection of colorectal neoplasia in a randomized trial. Gastroenterology. 2020 Aug. doi: 10.1053/j.gastro.2020.04.062.
2. Repici A, et al. Artificial intelligence and colonoscopy experience: Lessons from two randomised trials. Gut. Apr 2022. doi: 10.1136/gutjnl-2021-324471.
3. Wallace MB, et al. Impact of artificial intelligence on miss rate of colorectal neoplasia. Gastroenterology 2022 Jul. doi: 10.1053/j.gastro.2022.03.007.
4. United States Food and Drug Administration (FDA). GI Genius FDA Approval [April 9, 2021]. Accessed January 5, 2022. Available at: www.accessdata.fda.gov/cdrh_docs/pdf21/K211951.pdf.
5. Maas MHJ, et al. A computer-aided polyp detection system in screening and surveillance colonoscopy: An international, multicentre, randomised, tandem trial. Lancet Digit Health. 2024 Mar. doi: 10.1016/S2589-7500(23)00242-X.
6. Ladabaum U, et al. Computer-aided detection of polyps does not improve colonoscopist performance in a pragmatic implementation trial. Gastroenterology. 2023 Mar. doi: 10.1053/j.gastro.2022.12.004.
7. Wei MT, et al. Evaluation of computer-aided detection during colonoscopy in the community (AI-SEE): A multicenter randomized clinical trial. Am J Gastroenterol. 2023 Oct. doi: 10.14309/ajg.0000000000002239.
8. de Groof J, et al. The Argos project: The development of a computer-aided detection system to improve detection of Barrett’s neoplasia on white light endoscopy. United European Gastroenterol J. 2019 May. doi: 10.1177/2050640619837443.
9. Kanesaka T, et al. Computer-aided diagnosis for identifying and delineating early gastric cancers in magnifying narrow-band imaging. Gastrointest Endosc. 2018 May. doi: 10.1016/j.gie.2017.11.029.
10. Sahafi A, et al. Edge artificial intelligence wireless video capsule endoscopy. Sci Rep. 2022 Aug. doi: 10.1038/s41598-022-17502-7.
11. Njei B, et al. Artificial intelligence in endoscopic imaging for detection of malignant biliary strictures and cholangiocarcinoma: A systematic review. Ann Gastroenterol. 2023 Mar-Apr. doi: 10.20524/aog.2023.0779.
12. Ebigbo A, et al. Vessel and tissue recognition during third-space endoscopy using a deep learning algorithm. Gut. 2022 Dec. doi: 10.1136/gutjnl-2021-326470.
13. Cao J, et al. Intelligent surgical workflow recognition for endoscopic submucosal dissection with real-time animal study. Nat Commun. 2023 Oct. doi: 10.1038/s41467-023-42451-8.
The Promise and Challenges of AI in Hepatology
BY BASILE NJEI, MD, MPH, PHD; YAZAN A. AL-AJLOUNI, MPHIL
In the dynamic realm of medicine, artificial intelligence (AI) emerges as a transformative force, notably within hepatology. The discipline of hepatology, dedicated to liver and related organ diseases, is ripe for AI’s promise to revolutionize diagnostics and treatment, pushing toward a future of precision medicine. Yet, the path to fully realizing AI’s potential in hepatology is laced with data, ethical, and integration challenges.
The application of AI, particularly in histopathology, significantly enhances disease diagnosis and staging in hepatology. AI-driven approaches remedy traditional histopathological challenges, such as interpretative variability, providing more consistent and accurate disease analyses. This is especially evident in conditions like metabolic dysfunction-associated steatohepatitis (MASH) and hepatocellular carcinoma (HCC), where AI aids in identifying critical gene signatures, thereby refining therapy selection.
Similarly, deep learning (DL), a branch of AI, has attracted significant interest globally, particularly in image recognition. AI’s incorporation into medical imaging marks a significant advancement, enabling early detection of malignancies like HCC and improving diagnostics in steatotic liver disease through enhanced imaging analyses using convolutional neural networks (CNN). The abundance of imaging data alongside clinical outcomes has catalyzed AI’s integration into radiology, leading to the swift growth of radiomics as a novel domain in medical research.
AI has also been shown to identify nuanced alterations in electrocardiograms (EKGs) associated with liver conditions, potentially detecting the progression of liver diseases at an earlier stage than currently possible. By leveraging complex algorithms and machine learning, AI can analyze EKG patterns with a precision and depth unattainable through traditional manual interpretation. Given that liver diseases, such as cirrhosis or hepatitis, can induce subtle cardiac changes long before other clinical symptoms manifest, early detection through AI-enhanced EKG analysis could lead to timely interventions, potentially halting or reversing disease progression. This approach further enriches our understanding of the intricate interplay between liver function and cardiac health, highlighting the potential for AI to transform not just liver disease diagnostics but also to foster a more integrated approach to patient care.
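To make the deep learning step concrete, below is a minimal sketch of a 1D convolutional network of the general kind used to classify multi-lead ECG waveforms. It is written in PyTorch, and the lead count, sampling rate, and layer sizes are illustrative assumptions, not the published AI-Cirrhosis-ECG architecture.

```python
import torch
from torch import nn

class EcgClassifier(nn.Module):
    """Illustrative 1D CNN for ECG classification (assumed shapes, not the published model)."""

    def __init__(self, n_leads: int = 12, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one summary per channel
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, leads, samples), e.g. a 10-second, 500-Hz, 12-lead ECG
        return self.head(self.features(x).squeeze(-1))

model = EcgClassifier()
fake_batch = torch.randn(8, 12, 5000)  # random stand-in for real ECGs
print(model(fake_batch).shape)          # torch.Size([8, 2])
```

A real model would be trained on labeled ECGs and validated prospectively; the sketch shows only the shape of the approach.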
Beyond diagnostics, the burgeoning field of generative AI introduces groundbreaking possibilities in treatment planning and patient education, particularly for chronic conditions like cirrhosis. Generative AI produces original content, including text, visuals, and music, by identifying and learning patterns from its training data. When it leverages large language models (LLMs), it entails training on vast collections of textual data and using AI models characterized by many parameters. A notable instance of generative AI employing LLMs is ChatGPT (Generative Pretrained Transformer). By simulating disease progression and treatment outcomes, generative AI can foster personalized treatment strategies and empower patients with knowledge about their health trajectories. Yet realizing this potential requires overcoming data quality and interpretability challenges, and ensuring AI outputs are accessible and actionable for clinicians and patients.
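As a concrete and deliberately generic illustration of the patient-education use case, the sketch below drafts plain-language material by calling a text-generation API over HTTP. The endpoint URL, model name, and response shape are hypothetical placeholders rather than any specific vendor's API, and any generated draft would of course require clinician review before reaching a patient.

```python
import requests

# Hypothetical LLM endpoint and model name -- placeholders, not a real service.
API_URL = "https://example.com/v1/generate"
MODEL = "example-llm"

def draft_patient_education(condition: str, reading_level: str = "8th grade") -> str:
    """Ask a generative model to draft plain-language education text.

    The output is a draft only; a clinician must review it before patient use.
    """
    prompt = (
        f"Explain {condition} to a patient at a {reading_level} reading level. "
        "Cover what it is, how it tends to progress, and questions to ask the care team."
    )
    response = requests.post(
        API_URL,
        json={"model": MODEL, "prompt": prompt, "max_tokens": 400},
        timeout=30,
    )
    response.raise_for_status()
    # Assumed response shape: {"text": "..."} -- adjust to the provider's actual schema.
    return response.json()["text"]

if __name__ == "__main__":
    print(draft_patient_education("compensated cirrhosis"))
```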
Despite these advancements, leveraging AI in hepatology is not devoid of hurdles. The development and training of AI models require extensive and diverse datasets, raising concerns about data privacy and ethical use. Addressing these concerns is paramount for successfully integrating AI into clinical hepatology practice, necessitating transparent algorithmic processes and stringent ethical standards. Ethical considerations are central to AI’s integration into hepatology. Algorithmic biases, patient privacy, and the impact of AI-driven decisions underscore the need for cautious AI deployment. Developing transparent, understandable algorithms and establishing ethical guidelines for AI use are critical steps towards ethically leveraging AI in patient care.
In conclusion, AI’s integration into hepatology holds tremendous promise for advancing patient care through enhanced diagnostics, treatment planning, and patient education. Overcoming the associated challenges, including ethical concerns, data diversity, and algorithm interpretability, is crucial. As the hepatology community navigates this technological evolution, a balanced approach that marries technological advancements with ethical stewardship will be key to harnessing AI’s full potential, ensuring it serves the best interests of patients and propels the field of hepatology into the future.
We predict a trajectory of increased use and adoption of AI in hepatology. AI in hepatology is likely to meet the test of pervasiveness, improvement, and innovation. The adoption of AI in routine hepatology diagnosis and management will likely follow Amara’s law and the five stages of the Gartner hype cycle. We believe that we are still in the infancy of adopting AI technology in hepatology, and this phase may last 5 years before a peak of inflated expectations is reached. The trough of disillusionment and the slope of enlightenment may only be observed in the decades ahead.
Dr. Njei is based in the Section of Digestive Diseases, Yale School of Medicine, New Haven, Conn. Mr. Al-Ajlouni is a senior medical student at New York Medical College School of Medicine, Valhalla, N.Y. They have no conflicts of interest to declare.
Sources
Taylor-Weiner A, et al. A Machine Learning Approach Enables Quantitative Measurement of Liver Histology and Disease Monitoring in NASH. Hepatology. 2021 Jul. doi: 10.1002/hep.31750.
Zeng Q, et al. Artificial intelligence predicts immune and inflammatory gene signatures directly from hepatocellular carcinoma histology. J Hepatol. 2022 Jul. doi: 10.1016/j.jhep.2022.01.018.
Ahn JC, et al. Development of the AI-Cirrhosis-ECG Score: An Electrocardiogram-Based Deep Learning Model in Cirrhosis. Am J Gastroenterol. 2022 Mar. doi: 10.14309/ajg.0000000000001617.
Nduma BN, et al. The Application of Artificial Intelligence (AI)-Based Ultrasound for the Diagnosis of Fatty Liver Disease: A Systematic Review. Cureus. 2023 Dec 15. doi: 10.7759/cureus.50601.
‘We Need to Rethink Our Options’: Lung Cancer Recurrence
This transcript has been edited for clarity.
Hello. It’s Mark Kris reporting back after attending the New York Lung Cancer Foundation Summit here in New York. A large amount of discussion went on, but as usual, I was most interested in the perioperative space.
In previous videos, I’ve talked about this ongoing discussion of whether you should operate and give adjuvant therapy or give neoadjuvant therapy, and I’ve addressed that already. One thing I want to bring up as we move off of that argument – which frankly doesn’t have an answer today, even with neoadjuvant therapy having all the data to support it – is where our patients’ disease actually recurs.
I was taught early on by my surgical mentors that the issue here was systemic control. While they could do very successful surgery to get high levels of local control, they could not control systemic disease. Sadly, the tools we had early on with chemotherapy were just not good enough. Suddenly, we have better tools to control systemic spread. In the past, the vast majority of recurrences were systemic; they’re now local.
What I think we need to do as a group of practitioners trying to deal with the problems getting in the way of curing our patients is look at what the issue is now. Frankly, the big issue now, as systemic therapy has controlled metastatic disease, is recurrence in the chest.
We give adjuvant osimertinib. Please remember what the numbers are. In the osimertinib arm, of the 11 recurrences reported in the European Society for Medical Oncology presentation a few years back, nine of them were in the chest or mediastinal nodes. In the arm that got no osimertinib afterward, there were 46 recurrences, and 32 of those 46 recurrences were in the chest, either the lung or mediastinal nodes. Therefore, 74% of the recurrences are suddenly in the chest. What’s the issue here?
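As a quick back-of-the-envelope check on those proportions (a simple sketch using only the counts quoted above):

```python
# Recurrence counts quoted above from the ADAURA ESMO presentation.
osi_chest, osi_total = 9, 11        # osimertinib arm
ctrl_chest, ctrl_total = 32, 46     # no-osimertinib arm

print(f"Osimertinib arm: {osi_chest / osi_total:.0%} of recurrences in the chest")  # 82%
print(f"Control arm: {ctrl_chest / ctrl_total:.0%} of recurrences in the chest")    # 70%

# Pooling both arms: (9 + 32) / (11 + 46) = 41/57, roughly 72%,
# in the same neighborhood as the ~74% figure quoted in the talk.
pooled = (osi_chest + ctrl_chest) / (osi_total + ctrl_total)
print(f"Pooled: {pooled:.0%}")
```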
The issue is we need to find strategies to give better disease control in the chest, as we have made inroads in controlling systemic disease with the targeted therapies in the epidermal growth factor receptor space, and very likely the checkpoint inhibitors, too, as that data kind of filters out. We need to think about how better to get local control.
I think rather than continue to get into this argument of neoadjuvant vs adjuvant, we should move to what’s really hurting our patients. Again, the data I quoted you was from the ADAURA trial, which was adjuvant therapy, and I’m sure the neoadjuvant is going to show the same thing. It’s better systemic therapy but now, more trouble in the chest.
How are we going to deal with that? I’d like to throw out one strategy, and that is to rethink the role of radiation in these patients. Again, if the problem is local in the chest, lung, and lymph nodes, we have to think about local therapy. Yes, we’re not recommending it routinely for everybody, but now that we have better systemic control, we need to rethink our options. The obvious option to rethink is radiotherapy.
We should also use what we learned in the earlier trials, which is that there is harm in giving excessive radiation to the heart. If you avoid the heart, you avoid the harm. We have better planning strategies for stereotactic body radiotherapy and more traditional radiation, and of course, we have proton therapy as well.
As we continue to struggle with the idea of that patient with stage II or III disease, whether to give adjuvant vs neoadjuvant therapy, please remember to consider their risk in 2024. Their risk for first recurrence is in the chest.
What are we going to do to better control disease in the chest? We have a challenge. I’m sure we can meet it if we put our heads together.
Dr. Kris is professor of medicine at Weill Cornell Medical College, and attending physician, Thoracic Oncology Service, Memorial Sloan Kettering Cancer Center, New York. He disclosed ties with AstraZeneca, Roche/Genentech, Ariad Pharmaceuticals, Pfizer, and PUMA.
A version of this article appeared on Medscape.com.
GLP-1 Receptor Agonists: Which Drug for Which Patient?
With all the excitement about GLP-1 agonists, it is worth reviewing which drug is right for which patient.
Of course, we want to make sure that we’re treating the right condition. If the patient has type 2 diabetes, we tend to give them medication that is indicated for type 2 diabetes. Many GLP-1 agonists are available in a diabetes version and a chronic weight management or obesity version. If a patient has diabetes and obesity, they can receive either one. If a patient has only diabetes but not obesity, they should be prescribed the diabetes version. For obesity without diabetes, we tend to stick with the drugs that are indicated for chronic weight management.
Let’s go through them.
Exenatide. In chronological order of approval, the first GLP-1 drug used for diabetes was exenatide, available first as twice-daily Byetta and later as once-weekly Bydureon. Both are still on the market but infrequently used. Some patients found these medications inconvenient: Byetta requires twice-daily injections, and Bydureon can cause painful injection-site nodules.
Liraglutide. A diabetes drug in more common use is liraglutide (Victoza) for type 2 diabetes. It is a daily injection and has various doses. We always start low and increase with tolerance and the desired effect on A1c.
Victoza has an antiobesity counterpart called Saxenda. The Saxenda pen looks very similar to the Victoza pen. It is a daily GLP-1 agonist for chronic weight management. The SCALE trial demonstrated 8%-12% weight loss with Saxenda.
Those are the daily injections: Victoza for diabetes and Saxenda for weight loss.
Our patients are very excited about the advent of weekly injections for diabetes and weight management. Ozempic is very popular. It is a weekly GLP-1 agonist for type 2 diabetes. Many patients come in asking for Ozempic, and we must make sure that we’re moving them in the right direction depending on their condition.
Semaglutide. Ozempic has a few different doses. It is a weekly injection and has been found to be quite efficacious for treating diabetes. The drug’s weight loss counterpart is called Wegovy, which comes in a different pen. Both forms contain the compound semaglutide. While all of these GLP-1 agonists are indicated to treat type 2 diabetes or for weight management, Wegovy has a special indication that none of the others have. In March 2024, Wegovy received an indication to reduce cardiovascular risk in patients with a BMI ≥ 27 and a previous cardiac history. This will really change the accessibility of this medication because patients with heart conditions who are on Medicare are expected to have access to Wegovy.
Tirzepatide. Another weekly injection for treatment of type 2 diabetes is called Mounjaro. Its counterpart for weight management is called Zepbound, which was found to have about 20.9% weight loss over 72 weeks. These medications have similar side effects in differing degrees, but the most-often reported are nausea, stool changes, abdominal pain, and reflux. There are some other potential side effects; I recommend that you read the individual prescribing information available for each drug to have more clarity about that.
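To put those percentages in concrete terms, here is a quick sketch for a hypothetical patient; the 100-kg starting weight is an assumption for illustration only.

```python
# Hypothetical 100-kg (220-lb) starting weight; percent losses as quoted above.
start_kg = 100.0
reported_loss = {
    "Saxenda (SCALE trial)": (0.08, 0.12),    # 8%-12%
    "Zepbound (72 weeks)": (0.209, 0.209),    # about 20.9%
}

for drug, (low, high) in reported_loss.items():
    lo_kg, hi_kg = start_kg * low, start_kg * high
    if low == high:
        print(f"{drug}: about {lo_kg:.0f} kg lost")
    else:
        print(f"{drug}: roughly {lo_kg:.0f}-{hi_kg:.0f} kg lost")
```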
It is important that we stay on label for using the GLP-1 receptor agonists, for many reasons. One, it increases our patients’ accessibility to the right medication for them, and we can also make sure that we’re treating the patient with the right drug according to the clinical trials. When the clinical trials are done, the study populations demonstrate safety and efficacy for that population. But if we’re prescribing a GLP-1 for a different population, it is considered off-label use.
Dr. Lofton, an obesity medicine specialist, is clinical associate professor of surgery and medicine at NYU Grossman School of Medicine, and director of the medical weight management program at NYU Langone Weight Management Center, New York. She disclosed ties to Novo Nordisk and Eli Lilly. This transcript has been edited for clarity.
A version of this article appeared on Medscape.com.
CRC Screening in Primary Care: The Blood Test Option
Last year, I concluded a commentary for this news organization on colorectal cancer (CRC) screening guidelines by stating that between stool-based tests, flexible sigmoidoscopy, and colonoscopy, “the best screening test is the test that gets done.” But should that maxim apply to the new blood-based screening test, Guardant Health Shield? This proprietary test, which costs $895 and is not generally covered by insurance, identifies alterations in cell-free DNA that are characteristic of CRC.
Shield’s test characteristics were recently evaluated in a prospective study of more than 10,000 adults aged 45-84 at average risk for CRC. The test had an 87.5% sensitivity for stage I, II, or III colorectal cancer but only a 13% sensitivity for advanced precancerous lesions. Test specificity was 89.6%, meaning that about 1 in 10 participants without CRC or advanced precancerous lesions on colonoscopy had a false-positive result.
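To see what those operating characteristics imply at the population level, consider a rough sketch: screen 10,000 average-risk adults and assume a 0.5% prevalence of colorectal cancer. The prevalence figure is an assumption for illustration, not a number from the study.

```python
# Shield test characteristics from the prospective study quoted above.
sensitivity = 0.875   # stage I-III CRC
specificity = 0.896

# Illustrative assumptions (not from the study).
n_screened = 10_000
prevalence = 0.005    # assumed 0.5% CRC prevalence in an average-risk population

with_crc = n_screened * prevalence              # 50 people
without_crc = n_screened - with_crc             # 9,950 people

true_pos = with_crc * sensitivity               # ~44 cancers detected
false_neg = with_crc - true_pos                 # ~6 cancers missed
false_pos = without_crc * (1 - specificity)     # ~1,035 false alarms
ppv = true_pos / (true_pos + false_pos)         # ~4%

print(f"Detected: {true_pos:.0f}, missed: {false_neg:.0f}, "
      f"false positives: {false_pos:.0f}, PPV: {ppv:.1%}")
```

Under these assumptions, roughly a thousand people without cancer would be referred for diagnostic colonoscopy for every 40-some cancers found, which is the practical weight behind the specificity concern.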
Although the Shield blood test has a higher rate of false positives than the traditional fecal immunochemical test (FIT) and lower sensitivity and specificity than a multitarget stool DNA (FIT-DNA) test designed to improve on Cologuard, it meets the previously established criteria set forth by the Centers for Medicare & Medicaid Services (CMS) to be covered for Medicare beneficiaries at 3-year intervals, pending FDA approval.
A big concern, however, is that the availability of a blood test may cause patients who would have otherwise been screened with colonoscopy or stool tests to switch to the blood test. A cost-effectiveness analysis found that offering a blood test to patients who decline screening colonoscopy saves additional lives, but at the cost of more than $377,000 per life-year gained. Another study relying on three microsimulation models previously utilized by the US Preventive Services Task Force (USPSTF) found that annual FIT results in more life-years gained at substantially lower cost than blood-based screening every 3 years “even when uptake of blood-based screening was 20 percentage points higher than uptake of FIT.” As a result, a multidisciplinary expert panel concluded that blood-based screening should not substitute for established CRC screening tests, but instead be offered only to patients who decline those tests.
In practice, this will increase the complexity of the CRC screening conversations we have with patients. We will need to be clear that the blood test is not yet endorsed by the USPSTF or any major guideline group and is a second-line test that will miss most precancerous polyps. As with the stool tests, it is essential to emphasize that a positive result must be followed by diagnostic colonoscopy. To amend the cancer screening maxim I mentioned before, the blood test is not the best test for CRC, but it’s probably better than no test at all.
Dr. Lin is a family physician and associate director, Family Medicine Residency Program, Lancaster General Hospital, Lancaster, Pennsylvania. He blogs at Common Sense Family Doctor.
A version of this article appeared on Medscape.com.
Are Carbs Really the Enemy?
Recent headlines scream that we have an obesity problem and that carbs are the culprit for the problem. That leads me to ask: How did we get to blaming carbs as the enemy in the war against obesity?
First, a quick review of the history of diet and macronutrient content.
A long time ago, prehistoric humans foraged and hunted for food. Protein and fat were procured from animal meat, which was very important for encephalization, or evolutionary increase in the complexity or relative size of the brain. Most of the requirements for protein and iron were satisfied by hunting and eating land animals as well as consuming marine life that washed up on shore.
Plant foods supplied the carbohydrates in the prehistoric diet, providing a source of energy that offset the high protein content of the rest of their intake. These foods were available only during spring and summer.
Then, about 10,000 years ago, plant and animal agriculture began, and humans saw a permanent shift in the macronutrient content of our daily intake so that it was more consistent and stable. Initially, the nutrient characteristic changes were subtle, going from wild food to cultivated food with the Agricultural Revolution in the mid-17th century. Then, it changed even more rapidly less than 200 years ago with the Industrial Revolution, resulting in semiprocessed and ultraprocessed foods.
This change in food intake altered human physiology, with major changes in our digestive, immune, and neural physiology and an increase in chronic disease prevalence. The last 50 years have seen an increase in obesity in the United States, along with increases in chronic diseases such as type 2 diabetes, which leads to cardiovascular disease and certain cancers.
Back to Carbohydrates: Do We Need Them? How Much? What Kind?
Unfortunately, ultraprocessed foods have become a staple of the standard American or Western diet.
Ultraprocessed foods such as cakes, cookies, crackers, sugary breakfast cereals, pizza, potato chips, soft drinks, and ice cream are eons away from our prehistoric diet of wild game, nuts, fruits, and berries, on which our digestive, immune, and nervous systems evolved. The pace at which ultraprocessed foods have entered our diet outpaces the time necessary for our digestive systems and genes to adapt to these foods. They are indeed pathogenic in this context.
So when did humans consume an “optimal” diet? This is hard to say because during the time of brain evolution, we needed protein and iron and succumbed to infections and trauma. In the early 1900s, we continued to succumb to infection until the discovery of antibiotics. Soon thereafter, industrialization and processed foods led to weight gain and the chronic diseases of the cardiovascular system and type 2 diabetes.
Carbohydrates provide calories and fiber and some micronutrients, which are needed for energy, metabolism, and bowel and immune health. But how much do we need?
Currently in the United States, the percentage of total food energy derived from the three major macronutrients is: carbohydrates, 51.8%; fat, 32.8%; and protein, 15.4%. Current advice for a healthy diet to lower risk for cardiovascular disease is to limit fat intake to 30% of total energy, protein to 15%, and to increase complex carbohydrates to 55%-60% of total energy. But we also need to qualify this in terms of the quality of the macronutrient, particularly carbohydrates.
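Converting those energy percentages into grams makes the advice concrete. The sketch below uses the standard Atwater factors (4 kcal/g for carbohydrate and protein, 9 kcal/g for fat) and assumes a 2,000-kcal/day intake purely for illustration.

```python
# Assumed 2,000-kcal/day intake for illustration; Atwater factors are standard.
total_kcal = 2000
kcal_per_gram = {"carbohydrate": 4, "protein": 4, "fat": 9}

# Advised energy shares quoted above; carbohydrate set at the low end of
# its 55%-60% range so the three shares sum to 100%.
advised_share = {"carbohydrate": 0.55, "protein": 0.15, "fat": 0.30}

for macro, share in advised_share.items():
    grams = total_kcal * share / kcal_per_gram[macro]
    print(f"{macro}: {share:.0%} of energy is about {grams:.0f} g/day")
    # carbohydrate: 275 g, protein: 75 g, fat: 67 g
```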
Beyond quality, the macronutrient composition of the diet has also varied considerably since prehistoric times, when dietary protein intake was high at 19%-35% of energy at the expense of carbohydrate (22%-40% of energy).
If our genes haven’t kept up with industrialization, then why do we need so many carbohydrates to equate to 55%-60% of energy? Is it possible that we are confusing what is available with what we actually need? What do I mean by this?
We certainly have changed the landscape of the world due to agriculture, which has allowed us to procreate and feed ourselves, and certainly, industrialization has increased the availability of accessible cheap food. Protein in the form of meat, fish, and fowl are harder to get in industrialized nations as are fruits and vegetables. These macronutrients were the foods of our ancestors. It may be that a healthy diet is considered the one that is available.
For instance, the Mediterranean diet is somewhat higher in fat content, 40%-50% fat (mostly mono- and polyunsaturated), and similar in protein content but lower in carbohydrate content than the typical Western diet. The Dietary Approaches to Stop Hypertension (DASH) diet is lower in fat at 25% of total calories, higher in carbohydrates at 55%, and lower in protein; but this diet was generated in the United States, and is therefore more Western.
We need high-quality protein for organ and muscle function, high-quality unsaturated and monounsaturated fats for brain function and cellular functions, and high-quality complex carbohydrates for energy and gut health as well as micronutrients for many cellular functions. A ketogenic diet is not sustainable in the long-term for these reasons: chiefly the need for some carbohydrates for gut health and micronutrients.
How much carbohydrate content is needed should take into consideration energy expenditure as well as micronutrients and fiber intake. Protein and fat can contribute to energy production but not as readily as carbohydrates that can quickly restore glycogen in the muscle and liver. What’s interesting is that our ancestors were able to hunt and run away from danger with the small amounts of carbohydrates from plants and berries plus the protein and fat intake from animals and fish — but the Olympics weren’t a thing then!
It may be another 200,000 years before our genes catch up to ultraprocessed foods and the simple carbohydrates and sugars contained in these products. Evidence suggests that ultraprocessed foods cause inflammation in organs like the liver, adipose tissue, the heart, and even the brain. In the brain, this inflammation may be what’s causing us to defend a higher body weight set point in this environment of easily obtained highly palatable ultraprocessed foods.
Let’s not wait until our genes catch up and our bodies tolerate junk food without disease progression. It could be like waiting for Godot!
Dr. Apovian is professor of medicine, Harvard Medical School, and codirector, Center for Weight Management and Wellness, Brigham and Women’s Hospital, Boston, Massachusetts. She disclosed ties to Altimmune, CinFina Pharma, Cowen and Company, EPG Communication Holdings, Form Health, Gelesis, and L-Nutra.
A version of this article appeared on Medscape.com.
Weighing the Benefits of Integrating AI-based Clinical Notes Into Your Practice
Picture a healthcare system where physicians aren’t bogged down by excessive charting but are instead fully present with their patients, offering undivided attention and personalized care. In a recent X post, Stuart Blitz, COO and co-founder of Hone Health, sparked a thought-provoking conversation. “The problem with US healthcare is physicians are burned out since they spend way too much time charting, not enough with patients,” he wrote. “If you created a health system that did zero charting, you’d attract the best physicians and all patients would go there. Who is working on this?”
This resonates with many in the medical community, myself included. Having worked in both large and small healthcare systems, I know the burden of extensive charting is a palpable challenge, one that often detracts from the time we can devote to our patients.
The first part of this two-part series examines the overarching benefits of artificial intelligence (AI)–based clinical documentation in modern healthcare, a field witnessing a paradigm shift thanks to advancements in AI.
Transformative Evolution of Clinical Documentation
The transition from manual documentation to AI-driven solutions marks a significant shift in the field, with a number of products in development from companies including Nuance, Abridge, Ambience, ScribeAmerica, 3M, and DeepScribe. These tools use ambient clinical intelligence (ACI) to automate documentation, capturing patient conversations and translating them into structured clinical summaries. This innovation aligns with the vision of reducing charting burdens and enhancing patient-physician interactions.
How does it work? ACI refers to a sophisticated form of AI applied in healthcare settings, particularly focusing on enhancing the clinical documentation process without disrupting the natural flow of the consultation. Here’s a technical yet practical breakdown of ACI and the algorithms it typically employs:
Data capture and processing: ACI systems employ various sensors and processing units, typically integrated into clinical settings. These sensors, such as microphones and cameras, gather diverse data, including audio from patient-doctor dialogues and visual cues. This information is then processed in real time or near real time.
Natural language processing (NLP): A core component of ACI is advanced NLP algorithms. These algorithms analyze the captured audio data, transcribing spoken words into text. NLP goes beyond mere transcription; it involves understanding context, extracting relevant medical information (like symptoms, diagnoses, and treatment plans), and interpreting the nuances of human language.
Deep learning: Machine-learning techniques, particularly deep learning, are employed to continually improve the accuracy of ACI systems. These algorithms learn from vast datasets of clinical interactions, enhancing their ability to transcribe and interpret future conversations accurately. As they learn, they become better at understanding different accents, complex medical terms, and variations in speech patterns.
Integration with electronic health records (EHRs): ACI systems are often designed to integrate seamlessly with existing EHR systems. They can automatically populate patient records with information from patient-clinician interactions, reducing manual entry and potential errors.
Customization and personalization: Many ACI systems offer customizable templates or allow clinicians to tailor documentation workflows. This flexibility ensures that the output aligns with the specific needs and preferences of healthcare providers.
Ethical and privacy considerations: ACI systems must navigate significant ethical and privacy concerns, especially around patient consent and data security. They must comply with healthcare privacy regulations such as HIPAA, manage sensitive patient data securely, and restrict access to authorized personnel only.
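To make this flow concrete, here is a minimal, illustrative sketch in Python: a transcript (the output of the speech-to-text step) is scanned for clinical entities and assembled into a structured note. Everything here, from the `ClinicalNote` class to the keyword list, is a hypothetical toy rather than any vendor’s actual API; production systems use learned models precisely because naive keyword matching fails on context and negation, as the example itself shows.

```python
# Toy stand-in for the ACI steps above: transcript -> entity extraction -> note.
# All names are hypothetical placeholders, not a real vendor API.
from dataclasses import dataclass, field

@dataclass
class ClinicalNote:
    symptoms: list[str] = field(default_factory=list)
    plan: list[str] = field(default_factory=list)

# A real ACI system uses a learned NLP model; a keyword set keeps the toy small.
SYMPTOM_TERMS = {"cough", "fever", "fatigue", "chest pain"}

def extract_clinical_entities(transcript: str) -> ClinicalNote:
    """Naive extraction: match known symptom terms and treat
    sentences beginning with 'plan:' as plan items."""
    text = transcript.lower()
    note = ClinicalNote()
    note.symptoms = sorted(term for term in SYMPTOM_TERMS if term in text)
    note.plan = [s.strip() for s in text.split(".") if s.strip().startswith("plan:")]
    return note

transcript = (
    "Patient reports a dry cough and fatigue for five days. "
    "No chest pain. Plan: order a chest x-ray and basic labs."
)
print(extract_clinical_entities(transcript))
# ClinicalNote(symptoms=['chest pain', 'cough', 'fatigue'],
#              plan=['plan: order a chest x-ray and basic labs'])
# Note the false positive: "no chest pain" still matched. Handling negation
# and context is exactly why production ACI relies on deep NLP models.
```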
Broad-Spectrum Benefits of AI in Documentation
- Reducing clinician burnout: By automating the documentation process, AI tools like DAX Copilot alleviate a significant contributor to physician burnout, enabling clinicians to focus more on patient care.
- Enhanced patient care: With AI handling documentation, clinicians can engage more with their patients, leading to improved care quality and patient satisfaction.
- Data accuracy and quality: AI-driven documentation captures detailed patient encounters accurately, ensuring high-quality and comprehensive medical records.
- Response to the growing need for efficient healthcare: AI-based documentation is a direct response to the growing call for more efficient healthcare practices, where clinicians spend less time on paperwork and more with patients.
The shift toward AI-based clinical documentation represents a critical step in addressing the inefficiencies in healthcare systems. It is a move toward a more patient-centered approach, in which clinicians can focus more on patient care by reducing the time spent on excessive charting. Hopefully, we can integrate these solutions into our clinics at a large enough scale to make a real impact.
In the next column, we will explore in-depth insights from Kenneth Harper at Nuance on the technical implementation of these tools, with DAX as an example.
I would love to read your comments on AI in clinical trials as well as other AI-related topics. Write me at [email protected] or find me on X @DrBonillaOnc.
Dr. Loaiza-Bonilla is the co-founder and chief medical officer at Massive Bio, a company connecting patients to clinical trials using artificial intelligence. His research and professional interests focus on precision medicine, clinical trial design, digital health, entrepreneurship, and patient advocacy. Dr Loaiza-Bonilla serves as medical director of oncology research at Capital Health in New Jersey, where he maintains a connection to patient care by attending to patients 2 days a week. He has served as a consultant for Verify, PSI CRO, Bayer, AstraZeneca, Cardinal Health, BrightInsight, The Lynx Group, Fresenius, Pfizer, Ipsen, and Guardant; served as a speaker or a member of a speakers bureau for Amgen, Guardant, Eisai, Ipsen, Natera, Merck, Bristol-Myers Squibb, and AstraZeneca. He holds a 5% or greater equity interest in Massive Bio.
A version of this article appeared on Medscape.com.
‘Difficult Patient’: Stigmatizing Words and Medical Error
This transcript has been edited for clarity.
When I was doing my nephrology training, I had an attending who would write notes that were, well, kind of funny. I remember one time we were seeing a patient whose first name was “Lucky.” He dryly opened his section of the consult note as follows: “This is a 56-year-old woman with an ironic name who presents with acute renal failure.”
As an exhausted renal fellow, I appreciated the bit of color amid the ongoing series of tragedies that was the consult service. But let’s be clear — writing like this in the medical record is not a good idea. It wasn’t a good idea then, when any record might end up disclosed during a malpractice suit, and it’s really not a good idea now, when patients have ready and automated access to all the notes we write about them.
And yet, worse language than that of my attending appears in hospital notes all the time; there is research about this. Specifically, I’m talking about language that does not have high clinical utility but telegraphs the biases of the person writing the note. This is known as “stigmatizing language” and it can be overt or subtle.
For example, a physician wrote “I listed several fictitious medication names and she reported she was taking them.”
This casts suspicion on the patient’s credibility, as does the more subtle statement, “he claims nicotine patches don’t work for him.” Stigmatizing language may also cast the patient in a difficult light, as in this note: “she perseverated on the fact that ... ‘you wouldn’t understand.’ ”
Stay with me.
We are going to start by defining a very sick patient population: patients who, within 48 hours of hospital admission, were either transferred to the intensive care unit or died. Because of the severity of illness in this population, figuring out whether a diagnostic or other error was made is extremely high yield; such errors can mean the difference between life and death.
In a letter appearing in JAMA Internal Medicine, researchers examined a group of more than 2300 patients just like this from 29 hospitals, scouring the medical records for evidence of these types of errors.
Nearly one in four (23.2%) had at least one diagnostic error, which could include a missed physical exam finding, failure to ask a key question on history taking, inadequate testing, and so on.
Understanding why we make these errors is clearly critical to improving care for these patients. The researchers hypothesized that stigmatizing language might lead to errors like this. For example, by demonstrating that you don’t find a patient credible, you may ignore statements that would help make a better diagnosis.
Just over 5% of these patients had evidence of stigmatizing language in their medical notes. As in earlier studies, this language was more common if the patient was Black or had unstable housing.
Critically, stigmatizing language was more likely to be found among those who had diagnostic errors — a rate of 8.2% vs 4.1%. After adjustment for factors like race, the presence of stigmatizing language was associated with roughly a doubling of the risk for diagnostic errors.
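For readers who want to sanity-check that “roughly a doubling,” here is a back-of-the-envelope sketch. The counts are reconstructed from the rounded figures reported above (about 2300 patients, 23.2% with a diagnostic error, stigmatizing language in 8.2% of error cases vs 4.1% of the rest), so the result is approximate and unadjusted, unlike the letter’s modeled estimate.

```python
# Back-of-the-envelope unadjusted odds ratio from the reported rates.
# Counts reconstructed from rounded percentages; treat as approximate.
n_total = 2300
n_error = round(n_total * 0.232)           # ~534 patients with a diagnostic error
n_no_error = n_total - n_error             # ~1766 without one

stig_error = round(n_error * 0.082)        # ~44 error cases with stigmatizing language
stig_no_error = round(n_no_error * 0.041)  # ~72 non-error cases with it

odds_error = stig_error / (n_error - stig_error)
odds_no_error = stig_no_error / (n_no_error - stig_no_error)
print(f"Unadjusted odds ratio: {odds_error / odds_no_error:.2f}")  # about 2.11
```

The unadjusted figure lands near 2, consistent with the adjusted doubling of risk the authors report.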
Now, I’m all for eliminating stigmatizing language from our medical notes. And, given the increased transparency of all medical notes these days, I expect that we’ll see less of it over time. Of course, the fact that a physician doesn’t write something disparaging about the patient does not necessarily mean the bias isn’t there. Still, written comments affect all the other team members who care for that patient; they set a tone and can entrench an individual’s bias more broadly. We should strive to eliminate our biases when it comes to caring for patients. But perhaps the second-best thing is to work to keep those biases to ourselves.
Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Time Wasted to Avoid Penalties
Depression is a serious issue. I want to say that off the top, because nothing below is intended to minimize it.
But does everyone need to be tested for it?
A lot of general practices screen for it with every patient at every visit. After all, mandates say you have to, or you’ll get penalized a few bucks. Since no one wants to leave money on the table given the razor-thin margins of running a medical practice, they ask these questions (and I don’t blame them for that).
I can see where this might be useful, but does it really do much? Or is it just a mandatory waste of time?
Good question.
A recent review by the American College of Physicians found it was mostly a waste of time (which surprises no one). Only one of the eight measures involved in depression screening (suicide risk assessment) turned out to be useful. So, basically, 88% of the time spent on these questions contributed absolutely nothing of clinical relevance.
Of course, this isn’t unique to family medicine. Every time I see a Medicare or Medicare Advantage patient I have to document whether they’ve had flu and pneumonia vaccines. While there are occasional cases where asking about recent vaccines is critical to the history, for most it’s not. But I do it so I don’t get penalized, even though the answer changes nothing. It’s not like I give vaccines in my practice.
A fair number of people come to me for hospital follow-ups, so I go into the system and review the chart. The notes inevitably contain questions about sexual activity, fear of violence, fear of domestic abuse, food security, recent travel patterns, and so on. Some of these are useful in certain situations, but not in all, or even most. All they do is lengthen the note until anything of relevance is obscured, and allow someone in coding to check the boxes that raise the billing level. Realistically, the ER staff involved probably didn’t ask any of them and just clicked “no.”
Once this probably seemed like a good idea, but clearly most of it is now a waste of time. These “quality measures” have turned the art of taking a good history into a session of mouse and box clicking.
Does that really improve care?
Dr. Block has a solo neurology practice in Scottsdale, Arizona.
Meat Linked to Higher Erectile Dysfunction Risk
Rachel S. Rubin, MD: Welcome to another episode of Sex Matters. I’m Dr. Rachel Rubin. I’m a urologist and sexual medicine specialist based in the Washington, DC area, and I interview amazingly cool people doing research in sexual medicine.
I heard an incredible lecture at the Mayo Clinic urology conference by Dr. Stacy Loeb, a wonderful researcher of all things prostate cancer and men’s health who is now talking more about plant-based diets. Her lecture was so good, I begged her to join me for this discussion.
Dr. Loeb, I would love for you to introduce yourself.
Stacy Loeb, MD: I’m Dr. Loeb. I’m a urologist at New York University and the Manhattan VA, and I recently became board certified in lifestyle medicine because it’s so important for sexual health and, really, everything that we do.
Dr. Rubin: You recently became very interested in studying plant-based diets. How did that start, and how has the research evolved over time?
Dr. Loeb: It’s really amazing. For one thing, more of our patients with prostate cancer die of heart disease than of prostate cancer. And erectile dysfunction is really an early warning sign of cardiovascular disease. We felt like it was incumbent upon us, even within urology and sexual medicine, to better understand the basis for lifestyle modification that can help with these issues.
Dr. Rubin: Tell us more about what you found for erectile dysfunction. How much benefit do people get by switching to a plant-based diet?
Dr. Loeb: First, we looked at erectile function in men without prostate cancer in the Health Professionals Follow-Up Study, a very large cohort study out of Harvard University. We found that among omnivorous people, those who ate more plant-based and less animal-based food were less likely to develop erectile dysfunction. Then we published a new paper looking at patients with prostate cancer. These men face extra challenges for sexual function because, in addition to the standard cardiovascular changes of aging, prostate cancer treatment can affect the nerves involved in erections. But amazingly, even in that population, we found that the men who ate more plant-based and less animal-based food had better erectile function scores.
That was really good news, and it’s a win-win. There is no reason not to counsel our patients to eat more plant-based foods. Meat is not masculine. Meat is associated with a higher risk for erectile dysfunction and is considered carcinogenic. It’s just something that we should try to stay away from.
Dr. Rubin: How do you counsel patients who might not be ready to go fully plant-based? Is a little better than nothing? How do you even start these conversations with people? Do you have any tips for primary care doctors?
Dr. Loeb: Great question. A little bit is very much better than nothing. In fact, in the Health Professionals Follow-Up Study, we looked at quintiles ranging from people who ate the most animal-based and least plant-based foods all the way up to those who ate the most plant-based and least animal-based diets. Along that spectrum, it really does make a big difference, so wherever patients can start from is definitely better than nothing.
Simple things such as Meatless Monday or choosing a few days that they will give up animal-based foods will help. For some people, trying new things is easier than cutting things out, for example, trying a milk substitute such as oat, almond, or soy milk instead of dairy milk. That could be a great first step, or trying some dishes that don’t include meat — maybe a tofu stir fry or a taco or burrito without the meat.
There are many great options out there. In terms of resources for doctors, the Physicians Committee for Responsible Medicine has a great website. They have fact sheets for a lot of the common questions that people ask, such as: How can I get enough protein or calcium on a plant-based diet? This isn’t a problem at all. In fact, Novak Djokovic and many other elite athletes eat plant-based diets and get enough protein despite a much higher requirement than most of us who are not elite athletes. These fact sheets explain which plant foods are the best sources of these nutrients.
I also like Nutritionfacts.org, which has all kinds of great videos and resources. Both of these websites have recipes created by doctors and nutritionists.
We can also suggest that our patients work with a nutritionist or join a virtual program. For example, Plant Powered here in New York has virtual plant-based jumpstart programs, and people around the country who want a boost can join similar programs with nutritionists and health coaches.
Dr. Rubin: The data are really compelling. When you were speaking, not a person in the room was interested in having a steak that night for dinner, even with a steakhouse in the hotel.
What do you say to men who have prostate cancer or suffer from erectile dysfunction? Do any data show that by going plant-based they may see improvement? We have recent studies showing that regular exercise might be as good as Viagra.
Dr. Loeb: It’s definitely not too late, even if you’ve already been diagnosed with these conditions. In my own practice, I have seen changes in patients. In fact, one of the case scenarios that I submitted for the lifestyle medicine boards was a patient who adopted a whole food, plant-based diet and no longer uses Viagra. This is definitely something that’s possible to do with intensive lifestyle modification.
Dr. Rubin: Maybe vegetables are the new sexual health aid. How can people find out more? I know you have a Sirius XM radio show.
Dr. Loeb: It’s the Men’s Health Show on Sirius XM channel 110. It’s on Wednesdays from 6:00 to 8:00 PM ET, or you can listen to it on demand anytime through the Sirius XM app.
Dr. Rubin: You have done an enormous amount of research in prostate cancer and sexual medicine. You are an all-star in the field. Thank you for sharing all of your knowledge about plant-based diets. You’ve given us all a lot to think about today.
Dr. Rubin has disclosed the following relevant financial relationships: served as a speaker for Sprout; received a research grant from Maternal Medical; received income in an amount equal to or greater than $250 from Absorption Pharmaceuticals, GSK, and Endo.
A version of this article appeared on Medscape.com.