A CRC Blood Test Is Here. What Does It Mean for Screening?
In July, the US Food and Drug Administration (FDA) approved the first blood-based test to screen for colorectal cancer (CRC).
The FDA’s approval of Shield (Guardant Health) marks a notable achievement, as individuals at average risk now have the option to receive a simple blood test for CRC screening, starting at age 45.
“No one has an excuse anymore not to be screened,” said John Marshall, MD, director of The Ruesch Center for the Cure of Gastrointestinal Cancers and chief medical officer of the Lombardi Comprehensive Cancer Center at the Georgetown University Medical Center in Washington, DC.
The approval was based on findings from the ECLIPSE study, which reported that Shield had 83% sensitivity for CRC and 90% specificity for advanced neoplasia, though only 13% sensitivity for advanced precancerous lesions.
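For context, a back-of-the-envelope calculation can translate those percentages into expected outcomes per 10,000 people screened. A minimal sketch follows; the ~0.5% prevalence figure is an illustrative assumption, not a number from the ECLIPSE study, and the derived counts are rough arithmetic, not reported results.

```python
# Illustrative arithmetic only: the ~0.5% CRC prevalence among
# average-risk adults is an assumed ballpark that varies by age
# and population; derived counts are not from the ECLIPSE paper.

def screening_outcomes(n, prevalence, sensitivity, specificity):
    """Expected counts when n average-risk people take the blood test."""
    with_crc = n * prevalence
    without_crc = n - with_crc
    true_pos = with_crc * sensitivity            # cancers the test flags
    false_neg = with_crc - true_pos              # cancers the test misses
    false_pos = without_crc * (1 - specificity)  # the "1 in 10" alarms
    ppv = true_pos / (true_pos + false_pos)      # odds a positive is cancer
    return true_pos, false_neg, false_pos, ppv

tp, fn, fp, ppv = screening_outcomes(10_000, 0.005, 0.83, 0.90)
print(f"Per 10,000 screened: {tp:.1f} cancers flagged, {fn:.1f} missed, "
      f"{fp:.0f} false positives; PPV ~{ppv:.1%}")
# Per 10,000 screened: 41.5 cancers flagged, 8.5 missed,
# 995 false positives; PPV ~4.0%
```

The 90% specificity is what produces the roughly 1-in-10 false-positive rate that Dr. Marshall discusses below.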
While an exciting option, the test has its pros and cons.
The bad news is that it does a poor job of detecting precancerous lesions. That shortcoming could snowball if patients decide to replace a colonoscopy — which helps both detect and prevent CRC — with the blood test.
This news organization spoke to experts across three core specialties involved in the screening and treatment of CRC — primary care, gastroenterology, and oncology — to better understand both the potential value and potential pitfalls of this new option.
The interview responses have been condensed and edited for clarity.
What does this FDA approval mean for CRC screening?
David Lieberman, MD, gastroenterologist and professor emeritus at Oregon Health & Science University: Detecting circulating-free DNA associated with CRC in blood is a major scientific breakthrough. The ease of blood testing will appeal to patients and providers.
Folasade May, MD, director of the gastroenterology quality improvement program at the University of California, Los Angeles: The FDA approval means that we continue to broaden the scope of available tools to help reduce the impact of this largely preventable disease.
Dr. Marshall: Colonoscopy is still the gold standard, but we have to recognize that not everyone does it. And that not everyone wants to send their poop in the mail (with a stool-based test). Now there are no more excuses.
Alan Venook, MD, gastrointestinal medical oncologist at the University of California, San Francisco: Although it’s good to have a blood test that’s approved for CRC screening, I don’t think it moves the bar much in terms of screening. I worry about it overpromising and under-delivering. If it could find polyps or premalignant lesions, that would make a big difference; however, at 13%, that doesn’t really register, so this doesn’t really change anything.
Kenny Lin, MD, a family physician at Penn Medicine Lancaster General Health: I see this test as a good option for the 30% of people of CRC screening age who are either not being screened or out of date for screening. I’m a little concerned about the people who are already getting recommended screening and may try to switch to this option.
William Golden, MD, internist and professor of medicine and public health at the University of Arkansas for Medical Sciences, Little Rock, Arkansas: On a scale of 1-10, I give it a 2. It’s expensive ($900 per test without insurance). It’s also not sensitive for early cancers, which would be its main value. Frankly, there are better strategies to get patients engaged.
What do you see as the pros and cons of this test?
Dr. Lin: The pros are that it’s very convenient for patients, and it’s especially easy for physicians if they have a lab in their office and can avoid a referral where patients may never get the test. However, the data I saw were disappointing, with sensitivity and specificity falling short of the stool-based Cologuard test, which is also not invasive and less likely to miss early cancers, precancerous lesions, and polyps.
Dr. Lieberman: A major con is the detection rate of only 13% for advanced precancerous lesions, which means that this test is not likely to result in much cancer prevention. There is good evidence that if advanced precancerous lesions are detected and removed, many — if not most — CRCs can be prevented.
Dr. Marshall: Another issue is the potential for a false-positive result (which occurs for 1 in every 10 tests). With this result, you would do a scope but wouldn’t find what’s going on. This is a big deal. It’s the first of the blood tests that will be used for cancer screening, and it could be scary for a patient to receive a positive result but not be able to figure out where it’s coming from.
Will you be recommending this test or relying on its results?
Dr. Lieberman: Patients need to understand that the blood test is inferior to every other screening test and, if selected, would result in less protection against developing CRC or dying from CRC than other screening tests. But models suggest that this test will perform better than no screening. Therefore, it is reasonable to offer the test to individuals who decline any other form of screening.
Dr. May: I will do what I’ve always done — after the FDA approval, I wait for the US Preventive Services Task Force (USPSTF) to endorse it. If it does, then I feel it’s my responsibility to tell my patients about all the options they have and stay up to date on how the tests perform, what the pros and cons are, and what reliable information will help patients make the best decision.
Dr. Venook: No, but I could potentially see us moving it into surveillance mode, where CRC survivors or patients undergoing therapy could take it, which might give us a unique second bite of the apple. The test could potentially be of value in identifying early relapse or recurrence, which might give us a heads-up or jump start on follow-up.
Are you concerned that patients won’t return for a colonoscopy after a positive result?
Dr. Golden: This concern is relevant for all tests, including the fecal immunochemical test (FIT), but I’ve found that if the patient is willing to do the initial test and it comes back positive, most are willing to do the follow-up. Of course, some folks have issues with this, but now we’ll have a marker in their medical records and can re-engage them through outreach.
Dr. Lieberman: I am concerned that a patient who previously declined to have a colonoscopy may not follow up an abnormal blood test with a colonoscopy. If this occurs, it will render a blood test program ineffective for those patients. Patients should be told upfront that if the test is abnormal, a colonoscopy would be recommended.
Dr. May: This is a big concern that I have. We already have two-step screening processes with FIT, Cologuard, and CT colonography, and strong data show there is attrition. All doctors and companies will need to make it clear that if patients have an abnormal test result, they must undergo a colonoscopy. We must have activated and involved systems of patient follow-up and navigation.
Dr. Lin: I already have some concerns, given that some patients with positive FIT tests don’t get timely follow-up. I see it in my own practice where we call patients to get a colonoscopy, but they don’t take it seriously or their initial counseling wasn’t clear about the possibility of needing a follow-up colonoscopy. If people aren’t being screened for whatever reason in the first place and they get a positive result on the Shield blood test, they might be even less likely to get the necessary follow-up testing afterward.
What might this mean for insurance coverage and costs for patients?
Dr. May: This is an important question because if we don’t have equal access, we create or widen disparities. For insurers to cover Shield, it’ll need to be endorsed by major medical societies, including the USPSTF. But what will happen in the beginning is that wealthy patients who can pay out of pocket will use it, while lower-income individuals won’t have access until insurers cover it.
Dr. Golden: I could do 70 (or more) FIT tests for the cost of this one blood test. A FIT test should be offered first. We’re advising the Medicaid program that physicians should be required to explain why a patient doesn’t want a FIT test before it covers this blood test.
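A quick sketch of the arithmetic behind that comparison: the $900 Shield price comes from the article, but the per-test FIT cost below is only the figure implied by the “70 tests” remark, not a quoted price, and the program budget is hypothetical.

```python
# Back-of-the-envelope budget comparison. The $900 Shield price is from
# the article; the ~$13 FIT unit cost is implied by the "70 FIT tests"
# remark and is an assumption, not a quoted price.
budget = 90_000              # hypothetical screening-program budget ($)
shield_cost = 900            # per test, without insurance
fit_cost = shield_cost / 70  # ~$12.86 per test, implied

print(f"Shield: {budget // shield_cost} people screened")    # 100
print(f"FIT:    {budget // fit_cost:.0f} people screened")   # 7000
```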
Dr. Venook: It’s too early to say. Although it’s approved, we now have to look at the monetization factor. At the end of the day, we still need a colonoscopy. The science is impressive, but it doesn’t mean we need to spend $900 doing a blood test.
Dr. Lin: I could see the coverage trajectory being similar to that for Cologuard, which had little coverage when it came out 10 years ago, but eventually, Medicare and commercial coverage happened. With Shield, initially, there will be some coverage gaps, especially with commercial insurance, and I can see insurance companies having concerns, especially because the test is expensive compared with other tests and the return isn’t well known. It could also be a waste of money if people with positive tests don’t receive follow-up colonoscopies.
What else would you like to share that people may not have considered?
Dr. Marshall: These tests could pick up genetic signals from other cancers. My worry is that people could have another cancer detected but not find it on a colonoscopy and think the blood test must be wrong. Or they’ll do a scan, which could lead to more scans and tests.
Dr. Golden: This test has received a lot of attention and coverage that didn’t discuss other screening options, limitations, or nuances. Let’s face it — we’ll see lots of TV ads about it, but once we start dealing with the total cost of care and alternate payment models, it’s going to be hard for this test to find a niche.
Dr. Venook: This test has only been validated in a population aged 45 years or older, which is the conventional screening population. We desperately need something that can work in younger people, where CRC rates are increasing. I’d like to see the research move in that direction.
Dr. Lin: I thought it was unique that the FDA Advisory Panel clearly stated this was better than nothing but also should be used as second-line screening. The agency took pains to say this is not a colonoscopy or even equivalent to the fecal tests in use. But they appropriately did approve it because a lot of people aren’t getting anything at all, which is the biggest problem with CRC screening.
A version of this article first appeared on Medscape.com.
Five Steps to Improved Colonoscopy Performance
According to several experts who spoke at the American Gastroenterological Association’s Postgraduate Course this spring, which was offered at Digestive Disease Week (DDW), gastroenterologists can take these five steps to improve their performance: Addressing poor bowel prep, improving polyp detection, following the best intervals for polyp surveillance, reducing the environmental impact of gastrointestinal (GI) practice, and implementing artificial intelligence (AI) tools for efficiency and quality.
Addressing Poor Prep
To improve bowel preparation rates, clinicians may consider identifying those at high risk for inadequate prep, which could include known risk factors such as age, body mass index, inpatient status, constipation, tobacco use, and hypertension. However, other variables tend to be stronger predictors of inadequate prep, such as cirrhosis, Parkinson’s disease, dementia, diabetes, gastroparesis, opioid or tricyclic antidepressant use, and prior colorectal surgery.
Although several prediction models are based on some of these factors — looking at comorbidities, antidepressant use, constipation, and prior abdominal or pelvic surgery — the data don’t indicate whether knowing about or addressing these risks actually leads to better bowel prep, said Brian Jacobson, MD, associate professor of medicine at Harvard Medical School, Boston, and director of program development for gastroenterology at Massachusetts General Hospital in Boston.
Instead, the biggest return-on-investment option is to maximize prep for all patients, he said, especially since every patient has at least some risk of poor prep, either due to the required diet changes, medication considerations, or purgative solution and timing.
To create a state-of-the-art bowel prep process, Dr. Jacobson recommended numerous tactics for all patients: Verbal and written instructions for all components of prep, patient navigation with phone or virtual messaging to guide patients through the process, a low-fiber or all-liquid diet on the day before colonoscopy, and a split-dose 2-L prep regimen. Patients should begin the second half of the split-dose regimen 4-6 hours before colonoscopy and complete it at least 2 hours before the procedure starts, and clinicians should use an irrigation pump during colonoscopy to improve visibility.
Beyond that, Dr. Jacobson noted, higher-risk patients can take a split-dose 4-L prep regimen with bisacodyl, a low-fiber diet 2-3 days before colonoscopy, and a clear liquid diet the day before colonoscopy. Using simethicone as an adjunct solution can also reduce bubbles in the colon.
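As a minimal sketch of the split-dose timing rule described above — start the second dose 4-6 hours before the procedure and finish it at least 2 hours before — the dosing window can be computed from the scheduled procedure time. The function name and example time below are illustrative, not from any clinical system.

```python
# Computes the window for the second half of a split-dose prep:
# begin 4-6 hours before the colonoscopy, finish at least 2 hours
# before it starts. Illustrative only; not clinical software.
from datetime import datetime, timedelta

def second_dose_window(procedure_start: datetime):
    """Return (earliest_start, latest_start, finish_by) for dose two."""
    earliest_start = procedure_start - timedelta(hours=6)
    latest_start = procedure_start - timedelta(hours=4)
    finish_by = procedure_start - timedelta(hours=2)
    return earliest_start, latest_start, finish_by

early, late, finish = second_dose_window(datetime(2024, 9, 3, 10, 0))
print(f"Begin dose two between {early:%H:%M} and {late:%H:%M}; "
      f"finish by {finish:%H:%M}")
# Begin dose two between 04:00 and 06:00; finish by 08:00
```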
Future tech developments may help clinicians as well, he said, such as using AI to identify patients at high risk and modifying their prep process, creating a personalized prep on a digital platform with videos that guide patients through the process, and using a phone checklist tool to indicate when they’re ready for colonoscopy.
Improving Polyp Detection
Adenoma detection rates (ADR) can be highly variable due to different techniques, technical skills, pattern recognition, interpretation, and experience. New adjunct and AI-based tools can help improve ADR, especially if clinicians want to improve, receive training, and use best-practice techniques.
“In colonoscopy, it’s tricky because it’s not just a blood test or an x-ray. There’s really a lot of technique involved, both cognitive awareness and pattern recognition, as well as our technical skills,” said Tonya Kaltenbach, MD, professor of clinical medicine at the University of California San Francisco and director of advanced endoscopy at the San Francisco VA Health Care System in San Francisco.
For instance, multiple tools and techniques may be needed in real time to interpret a lesion, such as washing, retroflexing, and using better lighting, while paying attention to alerts and noting areas for further inspection and resection.
“This is not innate. It’s a learned skill,” she said. “It’s something we need to intentionally make efforts on and get feedback to improve.”
Improvement starts with using the right mindset for lesion detection, Dr. Kaltenbach said, by having a “reflexive recognition of deconstructed patterns of normal” — following the lines, vessels, and folds and looking for interruptions, abnormal thickness, and mucus caps. On top of that, adjunctive tools such as caps/cuffs and dye chromoendoscopy can help with proper ergonomics, irrigation, and mucosa exposure.
In the past 3 years, real-world studies using AI and computer-assisted detection have shown mixed results, with some demonstrating significant increases in ADR, while others haven’t, she said. However, being willing to try AI and other tools, such as the Endocuff cap, may help improve ADR, standardize interpretation, improve efficiency, and increase reproducibility.
“We’re always better with intentional feedback and deliberate practice,” she said. “Remember that if you improve, you’re protecting the patient from death and reducing interval cancer.”
Following Polyp Surveillance Intervals
The US Multi-Society Task Force on Colorectal Cancer’s recommendations for follow-up after colonoscopy and polypectomy provide valuable information and rationale for how to determine surveillance intervals for patients. However, clinicians still may be unsure what to recommend for some patients — or tell them to come back too soon, leading to unnecessary colonoscopy.
For instance, a 47-year-old woman who presents for her initial screening and has a single 6-mm polyp, which pathology returns as a single adenoma, may be considered to be at average risk and advised to return in 7-10 years. The guidelines seem more obvious for patients with one or two adenomas under 10 mm removed en bloc.
However, once the case details shift into gray areas and include three or four adenomas between 10 and 20 mm, or piecemeal removal, clinicians may differ on their recommendations, said Rajesh N. Keswani, MD, associate professor of medicine at the Northwestern University Feinberg School of Medicine and director of endoscopy for Northwestern Medicine in Chicago. At DDW 2024, Dr. Keswani presented several case examples and often found that audience opinions varied.
In addition, he noted, recent studies have found that clinicians may estimate imprecise polyp measurements, struggle to identify sessile serrated polyposis syndrome, and often don’t follow evidence-based guidelines.
“Why do we ignore the guidelines? There’s this perception that a patient has risk factors that aren’t addressed by the guidelines, with regards to family history or a distant history of a large polyp that we don’t want to leave to the usual intervals,” he said. “We feel uncomfortable, even with our meticulous colonoscopy, telling people to come back in 10 years.”
To improve guideline adherence, Dr. Keswani suggested providing additional education, implementing an automated surveillance calculator, and using guidelines at the point of care. At Northwestern, for instance, clinicians use a hyperlink with an interpreted version of the guidelines with prior colonoscopy considerations. Overall though, practitioners should feel comfortable leaning toward longer surveillance intervals, he noted.
“More effort should be spent on getting unscreened patients in for colonoscopy than bringing back low-risk patients too early,” he said.
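As a rough illustration of the kind of automated surveillance calculator Dr. Keswani describes, a simplified lookup over a subset of the 2020 US Multi-Society Task Force intervals might look like the sketch below. Real decision support has many more branches (serrated lesions, high-grade dysplasia, family history, prep quality), so this is an assumption-laden teaching sketch, not clinical software.

```python
# Simplified subset of the 2020 USMSTF post-polypectomy intervals.
# Illustrative only; real guidelines include many more lesion types
# and modifiers than this sketch covers.

def surveillance_interval(num_adenomas: int, largest_mm: float,
                          piecemeal: bool = False) -> str:
    """Return a suggested follow-up interval for a simple adenoma case."""
    if piecemeal and largest_mm >= 20:
        return "6 months (re-examine the resection site)"
    if num_adenomas > 10:
        return "1 year"
    if largest_mm >= 10 or num_adenomas >= 5:
        return "3 years"
    if num_adenomas >= 3:
        return "3-5 years"
    if num_adenomas >= 1:
        return "7-10 years"
    return "10 years (no adenomas found)"

# The 47-year-old with a single 6-mm adenoma from the example above:
print(surveillance_interval(num_adenomas=1, largest_mm=6))  # 7-10 years
```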
Reducing Environmental Effects
In recent waste audits of endoscopy rooms, providers generated 1-3 kg of waste per procedure, which, at 18 million procedures in the United States per year, would be enough to fill 117 soccer fields to a depth of 1 m. This waste comes from procedure-related equipment, administration, medications, travel of patients and staff, and infrastructure with systems such as air conditioning. Taking steps toward a green practice can reduce waste and the carbon footprint of healthcare.
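The scale of those figures can be sanity-checked with simple arithmetic; in the sketch below, the pitch area and the 2 kg midpoint are my assumptions for illustration, while the procedure count and per-procedure range come from the article.

```python
# Quick plausibility check of the waste figures above. The 18 million
# procedures and 1-3 kg/procedure are from the article; the pitch area
# and the 2 kg midpoint are assumptions for illustration.
procedures_per_year = 18_000_000
kg_per_procedure = 2        # midpoint of the 1-3 kg range
field_area_m2 = 7_140       # ~105 m x 68 m pitch, an assumption

total_kg = procedures_per_year * kg_per_procedure  # 36,000 tonnes/year
volume_m3 = 117 * field_area_m2 * 1                # 117 fields, 1 m deep
implied_density = total_kg / volume_m3             # ~43 kg per cubic meter
print(f"{total_kg / 1e6:.0f} kt of waste per year, "
      f"~{implied_density:.0f} kg/m^3 if spread over 117 pitches 1 m deep")
```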
“When we think about improving colonoscopy performance, the goal is to prevent colon cancer death, but when we expand that, we have to apply sustainable practices as a domain of quality,” said Heiko Pohl, MD, professor of medicine at the Geisel School of Medicine at Dartmouth in Hanover, New Hampshire, and a gastroenterologist at White River Junction VA Medical Center in White River Junction, Vermont.
The GI Multisociety Strategic Plan on Environmental Sustainability suggests a 5-year initiative to improve sustainability and reduce waste across seven domains — clinical setting, education, research, society efforts, intersociety efforts, industry, and advocacy.
For instance, clinicians can take the biggest step toward sustainability by avoiding unneeded colonoscopies, Dr. Pohl said, noting that between 20% and 30% aren’t appropriate or indicated. Instead, practitioners can implement longer surveillance intervals, adhere to guidelines, and consider alternative tests, such as the fecal immunochemical test, fecal DNA, blood-based tests, and CT colonography, where relevant.
Clinicians can also rethink their approach to resection, such as using a snare first instead of forceps to reduce single-instrument use, using clip closure only when it’s truly indicated, and implementing AI-assisted optical diagnosis to help with leaving rectosigmoid polyps in place.
In terms of physical waste, practices may also reconsider how they sort bins and biohazards, looking at new ways to dispose of regulated medical waste, sharps, recyclables, and typical trash. Waste audits can help find ways to reduce paper, combine procedures, and create more efficient use of endoscopy rooms.
“We are really in a very precarious situation,” Dr. Pohl said. “It’s our generation that has a responsibility to change the course for our children’s and grandchildren’s sake.”
AI for Quality and Efficiency
Moving forward, AI tools will likely become more popular in various parts of GI practice, by assisting with documentation, spotting polyps, tracking mucosal surfaces, providing optical histopathology, and supervising performance through high-quality feedback.
“Endoscopy has reached the limits of human visual capacity, where seeing more pixels won’t necessarily improve clinical diagnosis. What’s next for elevating the care of patients really is AI,” said Jason B. Samarasena, MD, professor of medicine and program director of the interventional endoscopy training program at the University of California Irvine in Irvine, California.
As practices adopt AI-based systems, however, clinicians should be cautious about a false sense of comfort or “alarm fatigue” if bounding boxes become distracting. Instead, new tools need to be adopted as a “physician-AI hybrid,” with the endoscopist in mind, particularly if helpful for performing a better exam by watching withdrawal time or endoscope slippage.
“In real-world practice, this is being implemented without attention to endoscopist inclination and behavior,” he said. “Having a better understanding of physician attitudes could yield more optimal results.”
Notably, AI-assisted tools should be viewed as akin to spell-check, which signals to the endoscopist when to pay attention and double-check an area — but primarily relies on the expert to do a high-quality exam, said Aasma Shaukat, MD, professor of medicine and director of GI outcomes research at the NYU Grossman School of Medicine, New York City.
“This should be an adjunct or an additional tool, not a replacement tool,” she added. “This doesn’t mean to stop doing astute observation.”
Future tools show promise for tracking additional data related to prep quality, cecal landmarks, polyp size, mucosa exposure, histology prediction, and complete resection. These automated reports could also link to real-time dashboards, hospital or national registries, and reimbursement systems, Dr. Shaukat noted.
“At the end of the day, our interests are aligned,” she said. “Everybody cares about quality, patient satisfaction, and reimbursement, and with that goal in mind, I think some of the tools can be applied to show how we can achieve those principles together.”
Dr. Jacobson, Dr. Kaltenbach, Dr. Keswani, Dr. Pohl, Dr. Samarasena, and Dr. Shaukat reported no relevant financial relationships.
A version of this article appeared on Medscape.com.
According to several experts who spoke at the American Gastroenterological Association’s Postgraduate Course this spring, which was offered at Digestive Disease Week (DDW), gastroenterologists can take these five steps to improve their performance: Addressing poor bowel prep, improving polyp detection, following the best intervals for polyp surveillance, reducing the environmental impact of gastrointestinal (GI) practice, and implementing artificial intelligence (AI) tools for efficiency and quality.
Addressing Poor Prep
To improve bowel preparation rates, clinicians may consider identifying those at high risk for inadequate prep, which could include known risk factors such as age, body mass index, inpatient status, constipation, tobacco use, and hypertension. However, other variables tend to serve as bigger predictors of inadequate prep, such as the patient’s status regarding cirrhosis, Parkinson’s disease, dementia, diabetes, opioid use, gastroparesis, tricyclics, and colorectal surgery.
Although several prediction models are based on some of these factors — looking at comorbidities, antidepressant use, constipation, and prior abdominal or pelvic surgery — the data don’t indicate whether knowing about or addressing these risks actually leads to better bowel prep, said Brian Jacobson, MD, associate professor of medicine at Harvard Medical School, Boston, and director of program development for gastroenterology at Massachusetts General Hospital in Boston.
Instead, the biggest return-on-investment option is to maximize prep for all patients, he said, especially since every patient has at least some risk of poor prep, either due to the required diet changes, medication considerations, or purgative solution and timing.
To create a state-of-the-art bowel prep process, Dr. Jacobson recommended numerous tactics for all patients: Verbal and written instructions for all components of prep, patient navigation with phone or virtual messaging to guide patients through the process, a low-fiber or all-liquid diet on the day before colonoscopy, and a split-dose 2-L prep regimen. Patients should begin the second half of the split-dose regimen 4-6 hours before colonoscopy and complete it at least 2 hours before the procedure starts, and clinicians should use an irrigation pump during colonoscopy to improve visibility.
Beyond that, Dr. Jacobson noted, higher risk patients can take a split-dose 4-L prep regimen with bisacodyl, a low-fiber diet 2-3 days before colonoscopy, and a clear liquid diet the day before colonoscopy. Using simethicone as an adjunct solution can also reduce bubbles in the colon.
Future tech developments may help clinicians as well, he said, such as using AI to identify patients at high risk and modifying their prep process, creating a personalized prep on a digital platform with videos that guide patients through the process, and using a phone checklist tool to indicate when they’re ready for colonoscopy.
Improving Polyp Detection
Adenoma detection rates (ADR) can be highly variable due to different techniques, technical skills, pattern recognition, interpretation, and experience. New adjunct and AI-based tools can help improve ADR, especially if clinicians want to improve, receive training, and use best-practice techniques.
“In colonoscopy, it’s tricky because it’s not just a blood test or an x-ray. There’s really a lot of technique involved, both cognitive awareness and pattern recognition, as well as our technical skills,” said Tonya Kaltenbach, MD, professor of clinical medicine at the University of California San Francisco and director of advanced endoscopy at the San Francisco VA Health Care System in San Francisco.
For instance, multiple tools and techniques may be needed in real time to interpret a lesion, such as washing, retroflexing, and using better lighting, while paying attention to alerts and noting areas for further inspection and resection.
“This is not innate. It’s a learned skill,” she said. “It’s something we need to intentionally make efforts on and get feedback to improve.”
Improvement starts with using the right mindset for lesion detection, Dr. Kaltenbach said, by having a “reflexive recognition of deconstructed patterns of normal” — following the lines, vessels, and folds and looking for interruptions, abnormal thickness, and mucus caps. On top of that, adjunctive tools such as caps/cuffs and dye chromoendoscopy can help with proper ergonomics, irrigation, and mucosa exposure.
In the past 3 years, real-world studies using AI and computer-assisted detection have shown mixed results, with some demonstrating significant increases in ADR, while others haven’t, she said. However, being willing to try AI and other tools, such as the Endocuff cap, may help improve ADR, standardize interpretation, improve efficiency, and increase reproducibility.
“We’re always better with intentional feedback and deliberate practice,” she said. “Remember that if you improve, you’re protecting the patient from death and reducing interval cancer.”
Following Polyp Surveillance Intervals
The US Multi-Society Task Force on Colorectal Cancer’s recommendations for follow-up after colonoscopy and polypectomy provide valuable information and rationale for how to determine surveillance intervals for patients. However, clinicians still may be unsure what to recommend for some patients — or tell them to come back too soon, leading to unnecessary colonoscopy.
For instance, a 47-year-old woman who presents for her initial screening and has a single 6-mm polyp, which pathology returns as a single adenoma may be considered to be at average risk and suggested to return in 7-10 years. The guidelines seem more obvious for patients with one or two adenomas under 10 mm removed en bloc.
However, once the case details shift into gray areas and include three or four adenomas between 10 and 20 mm, or piecemeal removal, clinicians may differ on their recommendations, said Rajesh N. Keswani, MD, associate professor of medicine at the Northwestern University Feinberg School of Medicine and director of endoscopy for Northwestern Medicine in Chicago. At DDW 2024, Dr. Keswani presented several case examples, often finding various audience opinions.
In addition, he noted, recent studies have found that clinicians may estimate imprecise polyp measurements, struggle to identify sessile serrated polyposis syndrome, and often don’t follow evidence-based guidelines.
“Why do we ignore the guidelines? There’s this perception that a patient has risk factors that aren’t addressed by the guidelines, with regards to family history or a distant history of a large polyp that we don’t want to leave to the usual intervals,” he said. “We feel uncomfortable, even with our meticulous colonoscopy, telling people to come back in 10 years.”
To improve guideline adherence, Dr. Keswani suggested providing additional education, implementing an automated surveillance calculator, and using guidelines at the point of care. At Northwestern, for instance, clinicians use a hyperlink with an interpreted version of the guidelines with prior colonoscopy considerations. Overall though, practitioners should feel comfortable leaning toward longer surveillance intervals, he noted.
“More effort should be spent on getting unscreened patients in for colonoscopy than bringing back low-risk patients too early,” he said.
Reducing Environmental Effects
In recent waste audits of endoscopy rooms, providers generate 1-3 kg of waste per procedure, which would fill 117 soccer fields to a depth of 1 m, based on 18 million procedures in the United States per year. This waste comes from procedure-related equipment, administration, medications, travel of patients and staff, and infrastructure with systems such as air conditioning. Taking steps toward a green practice can reduce waste and the carbon footprint of healthcare.
“When we think about improving colonoscopy performance, the goal is to prevent colon cancer death, but when we expand that, we have to apply sustainable practices as a domain of quality,” said Heiko Pohl, MD, professor of medicine at the Geisel School of Medicine at Dartmouth in Hanover, New Hampshire, and a gastroenterologist at White River Junction VA Medical Center in White River Junction, Vermont.
The GI Multisociety Strategic Plan on Environmental Sustainability suggests a 5-year initiative to improve sustainability and reduce waste across seven domains — clinical setting, education, research, society efforts, intersociety efforts, industry, and advocacy.
For instance, clinicians can take the biggest step toward sustainability by avoiding unneeded colonoscopies, Dr. Pohl said, noting that between 20% and 30% aren’t appropriate or indicated. Instead, practitioners can implement longer surveillance intervals, adhere to guidelines, and consider alternative tests, such as the fecal immunochemical test, fecal DNA, blood-based tests, and CT colonography, where relevant.
Clinicians can also rethink their approach to resection, such as using a snare first instead of forceps to reduce single-instrument use, using clip closure only when it’s truly indicated, and implementing AI-assisted optical diagnosis to help with leaving rectosigmoid polyps in place.
In terms of physical waste, practices may also reconsider how they sort bins and biohazards, looking at new ways to dispose of regulated medical waste, sharps, recyclables, and typical trash. Waste audits can help find ways to reduce paper, combine procedures, and create more efficient use of endoscopy rooms.
“We are really in a very precarious situation,” Dr. Pohl said. “It’s our generation that has a responsibility to change the course for our children’s and grandchildren’s sake.”
AI for Quality And Efficiency
Moving forward, AI tools will likely become more popular in various parts of GI practice, by assisting with documentation, spotting polyps, tracking mucosal surfaces, providing optical histopathology, and supervising performance through high-quality feedback.
“Endoscopy has reached the limits of human visual capacity, where seeing more pixels won’t necessarily improve clinical diagnosis. What’s next for elevating the care of patients really is AI,” said Jason B. Samarasena, MD, professor of medicine and program director of the interventional endoscopy training program at the University of California Irvine in Irvine, California.
As practices adopt AI-based systems, however, clinicians should be cautious about a false sense of comfort or “alarm fatigue” if bounding boxes become distracting. Instead, new tools need to be adopted as a “physician-AI hybrid,” with the endoscopist in mind, particularly if helpful for performing a better exam by watching withdrawal time or endoscope slippage.
“In real-world practice, this is being implemented without attention to endoscopist inclination and behavior,” he said. “Having a better understanding of physician attitudes could yield more optimal results.”
Notably, AI-assisted tools should be viewed akin to spell-check, which signals to the endoscopist when to pay attention and double-check an area — but primarily relies on the expert to do a high-quality exam, said Aasma Shaukat, MD, professor of medicine and director of GI outcomes research at the NYU Grossman School of Medicine, New York City.
“This should be an adjunct or an additional tool, not a replacement tool,” she added. “This doesn’t mean to stop doing astute observation.”
Future tools show promise in terms of tracking additional data related to prep quality, cecal landmarks, polyp size, mucosa exposure, histology prediction, and complete resection. These automated reports could also link to real-time dashboards, hospital or national registries, and reimbursement systems, Dr. Shaukat noted.
“At the end of the day, our interests are aligned,” she said. “Everybody cares about quality, patient satisfaction, and reimbursement, and with that goal in mind, I think some of the tools can be applied to show how we can achieve those principles together.”
Dr. Jacobson, Dr. Kaltenbach, Dr. Keswani, Dr. Pohl, Dr. Samarasena, and Dr. Shaukat reported no relevant financial relationships.
A version of this article appeared on Medscape.com.
According to several experts who spoke at the American Gastroenterological Association’s Postgraduate Course this spring, which was offered at Digestive Disease Week (DDW), gastroenterologists can take these five steps to improve their performance: Addressing poor bowel prep, improving polyp detection, following the best intervals for polyp surveillance, reducing the environmental impact of gastrointestinal (GI) practice, and implementing artificial intelligence (AI) tools for efficiency and quality.
Addressing Poor Prep
To improve bowel preparation rates, clinicians may consider identifying those at high risk for inadequate prep, which could include known risk factors such as age, body mass index, inpatient status, constipation, tobacco use, and hypertension. However, other variables tend to serve as bigger predictors of inadequate prep, such as the patient’s status regarding cirrhosis, Parkinson’s disease, dementia, diabetes, opioid use, gastroparesis, tricyclics, and colorectal surgery.
Although several prediction models are based on some of these factors — looking at comorbidities, antidepressant use, constipation, and prior abdominal or pelvic surgery — the data don’t indicate whether knowing about or addressing these risks actually leads to better bowel prep, said Brian Jacobson, MD, associate professor of medicine at Harvard Medical School, Boston, and director of program development for gastroenterology at Massachusetts General Hospital in Boston.
Instead, the biggest return-on-investment option is to maximize prep for all patients, he said, especially since every patient has at least some risk of poor prep, either due to the required diet changes, medication considerations, or purgative solution and timing.
To create a state-of-the-art bowel prep process, Dr. Jacobson recommended numerous tactics for all patients: verbal and written instructions for all components of prep, patient navigation with phone or virtual messaging to guide patients through the process, a low-fiber or all-liquid diet on the day before colonoscopy, and a split-dose 2-L prep regimen. Patients should begin the second half of the split-dose regimen 4-6 hours before colonoscopy and complete it at least 2 hours before the procedure starts, and clinicians should use an irrigation pump during colonoscopy to improve visibility.
Beyond that, Dr. Jacobson noted, higher risk patients can take a split-dose 4-L prep regimen with bisacodyl, a low-fiber diet 2-3 days before colonoscopy, and a clear liquid diet the day before colonoscopy. Using simethicone as an adjunct solution can also reduce bubbles in the colon.
Future tech developments may help clinicians as well, he said, such as using AI to identify patients at high risk and modifying their prep process, creating a personalized prep on a digital platform with videos that guide patients through the process, and using a phone checklist tool to indicate when they’re ready for colonoscopy.
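As a concrete illustration of the kind of AI-driven risk flagging described above, the sketch below checks a patient’s record against the stronger predictors of inadequate prep named earlier and pairs high-risk patients with the intensified regimen Dr. Jacobson outlines. The field names and the simple any-predictor rule are assumptions of this hypothetical sketch, not a validated prediction model.

```python
# Hypothetical sketch: flag patients who may warrant an intensified bowel-prep
# regimen, using the stronger predictors named in the article. The any-predictor
# rule and field names are illustrative assumptions, not a validated model.

HIGH_RISK_PREDICTORS = {
    "cirrhosis", "parkinsons_disease", "dementia", "diabetes",
    "gastroparesis", "opioid_use", "tricyclic_use", "prior_colorectal_surgery",
}

def prep_plan(patient_factors: set) -> dict:
    """Suggest a prep regimen based on whether any high-risk predictor applies."""
    hits = sorted(patient_factors & HIGH_RISK_PREDICTORS)
    if hits:
        return {
            "risk": "high",
            "factors": hits,
            "regimen": "split-dose 4-L prep with bisacodyl",
            "diet": "low-fiber diet 2-3 days before; clear liquids the day before",
            "adjunct": "consider simethicone to reduce bubbles in the colon",
        }
    return {
        "risk": "standard",
        "factors": hits,
        "regimen": "split-dose 2-L prep",
        "diet": "low-fiber or all-liquid diet on the day before colonoscopy",
        "adjunct": None,
    }

print(prep_plan({"diabetes", "opioid_use"}))
```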
Improving Polyp Detection
Adenoma detection rates (ADR) can vary widely with differences in technique, technical skill, pattern recognition, interpretation, and experience. New adjunct and AI-based tools can help improve ADR, especially when clinicians are motivated to improve, seek training, and use best-practice techniques.
“In colonoscopy, it’s tricky because it’s not just a blood test or an x-ray. There’s really a lot of technique involved, both cognitive awareness and pattern recognition, as well as our technical skills,” said Tonya Kaltenbach, MD, professor of clinical medicine at the University of California San Francisco and director of advanced endoscopy at the San Francisco VA Health Care System in San Francisco.
For instance, multiple tools and techniques may be needed in real time to interpret a lesion, such as washing, retroflexing, and using better lighting, while paying attention to alerts and noting areas for further inspection and resection.
“This is not innate. It’s a learned skill,” she said. “It’s something we need to intentionally make efforts on and get feedback to improve.”
Improvement starts with using the right mindset for lesion detection, Dr. Kaltenbach said, by having a “reflexive recognition of deconstructed patterns of normal” — following the lines, vessels, and folds and looking for interruptions, abnormal thickness, and mucus caps. On top of that, adjunctive tools such as caps/cuffs and dye chromoendoscopy can help with proper ergonomics, irrigation, and mucosa exposure.
In the past 3 years, real-world studies using AI and computer-assisted detection have shown mixed results, with some demonstrating significant increases in ADR, while others haven’t, she said. However, being willing to try AI and other tools, such as the Endocuff cap, may help improve ADR, standardize interpretation, improve efficiency, and increase reproducibility.
“We’re always better with intentional feedback and deliberate practice,” she said. “Remember that if you improve, you’re protecting the patient from death and reducing interval cancer.”
Following Polyp Surveillance Intervals
The US Multi-Society Task Force on Colorectal Cancer’s recommendations for follow-up after colonoscopy and polypectomy provide valuable information and rationale for how to determine surveillance intervals for patients. However, clinicians still may be unsure what to recommend for some patients — or tell them to come back too soon, leading to unnecessary colonoscopy.
For instance, a 47-year-old woman who presents for her initial screening and has a single 6-mm polyp, which pathology returns as a single adenoma, may be considered at average risk and advised to return in 7-10 years. The guidelines are relatively clear-cut for patients with one or two adenomas under 10 mm removed en bloc.
However, once the case details shift into gray areas and include three or four adenomas between 10 and 20 mm, or piecemeal removal, clinicians may differ on their recommendations, said Rajesh N. Keswani, MD, associate professor of medicine at the Northwestern University Feinberg School of Medicine and director of endoscopy for Northwestern Medicine in Chicago. At DDW 2024, Dr. Keswani presented several case examples that often elicited a range of audience opinions.
In addition, he noted, recent studies have found that clinicians may estimate imprecise polyp measurements, struggle to identify sessile serrated polyposis syndrome, and often don’t follow evidence-based guidelines.
“Why do we ignore the guidelines? There’s this perception that a patient has risk factors that aren’t addressed by the guidelines, with regards to family history or a distant history of a large polyp that we don’t want to leave to the usual intervals,” he said. “We feel uncomfortable, even with our meticulous colonoscopy, telling people to come back in 10 years.”
To improve guideline adherence, Dr. Keswani suggested providing additional education, implementing an automated surveillance calculator, and using guidelines at the point of care. At Northwestern, for instance, clinicians use a hyperlink with an interpreted version of the guidelines with prior colonoscopy considerations. Overall though, practitioners should feel comfortable leaning toward longer surveillance intervals, he noted.
“More effort should be spent on getting unscreened patients in for colonoscopy than bringing back low-risk patients too early,” he said.
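A minimal sketch of the automated surveillance calculator Dr. Keswani recommends might look like the following. It encodes only the clear-cut scenario the article walks through (one or two small adenomas removed en bloc, return in 7-10 years) and deliberately routes every gray-zone case back to the published guidelines rather than guessing; the function name and fallback behavior are assumptions for illustration.

```python
# Hypothetical surveillance-interval lookup. Only the clear-cut case described
# in the article is encoded; gray-zone findings are deferred to the US
# Multi-Society Task Force guidelines rather than hard-coded here.

def surveillance_interval(num_adenomas: int, max_size_mm: float, en_bloc: bool) -> str:
    if num_adenomas <= 2 and max_size_mm < 10 and en_bloc:
        # The article's example: a single 6-mm adenoma at initial screening.
        return "7-10 years (average risk)"
    # Three or four adenomas, lesions of 10-20 mm, or piecemeal removal are
    # exactly where recommendations diverge, so defer to the guidelines and
    # prior-colonoscopy considerations at the point of care.
    return "consult USMSTF guidelines / institutional calculator"

print(surveillance_interval(1, 6, True))    # -> "7-10 years (average risk)"
print(surveillance_interval(3, 15, False))  # -> gray zone, defer to guidelines
```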
Reducing Environmental Effects
Recent waste audits of endoscopy rooms show that providers generate 1-3 kg of waste per procedure; based on 18 million procedures in the United States per year, that is enough waste to fill 117 soccer fields to a depth of 1 m. This waste comes from procedure-related equipment, administration, medications, travel of patients and staff, and infrastructure with systems such as air conditioning. Taking steps toward a green practice can reduce waste and the carbon footprint of healthcare.
“When we think about improving colonoscopy performance, the goal is to prevent colon cancer death, but when we expand that, we have to apply sustainable practices as a domain of quality,” said Heiko Pohl, MD, professor of medicine at the Geisel School of Medicine at Dartmouth in Hanover, New Hampshire, and a gastroenterologist at White River Junction VA Medical Center in White River Junction, Vermont.
The GI Multisociety Strategic Plan on Environmental Sustainability suggests a 5-year initiative to improve sustainability and reduce waste across seven domains — clinical setting, education, research, society efforts, intersociety efforts, industry, and advocacy.
For instance, clinicians can take the biggest step toward sustainability by avoiding unneeded colonoscopies, Dr. Pohl said, noting that between 20% and 30% aren’t appropriate or indicated. Instead, practitioners can implement longer surveillance intervals, adhere to guidelines, and consider alternative tests, such as the fecal immunochemical test, fecal DNA, blood-based tests, and CT colonography, where relevant.
Clinicians can also rethink their approach to resection, such as using a snare first instead of forceps to reduce single-instrument use, using clip closure only when it’s truly indicated, and implementing AI-assisted optical diagnosis to help with leaving rectosigmoid polyps in place.
In terms of physical waste, practices may also reconsider how they sort bins and biohazards, looking at new ways to dispose of regulated medical waste, sharps, recyclables, and typical trash. Waste audits can help find ways to reduce paper, combine procedures, and create more efficient use of endoscopy rooms.
“We are really in a very precarious situation,” Dr. Pohl said. “It’s our generation that has a responsibility to change the course for our children’s and grandchildren’s sake.”
AI for Quality and Efficiency
Moving forward, AI tools will likely become more popular in various parts of GI practice, by assisting with documentation, spotting polyps, tracking mucosal surfaces, providing optical histopathology, and supervising performance through high-quality feedback.
“Endoscopy has reached the limits of human visual capacity, where seeing more pixels won’t necessarily improve clinical diagnosis. What’s next for elevating the care of patients really is AI,” said Jason B. Samarasena, MD, professor of medicine and program director of the interventional endoscopy training program at the University of California Irvine in Irvine, California.
As practices adopt AI-based systems, however, clinicians should be cautious about a false sense of comfort or “alarm fatigue” if bounding boxes become distracting. Instead, new tools need to be adopted as a “physician-AI hybrid,” with the endoscopist in mind, particularly if helpful for performing a better exam by watching withdrawal time or endoscope slippage.
“In real-world practice, this is being implemented without attention to endoscopist inclination and behavior,” he said. “Having a better understanding of physician attitudes could yield more optimal results.”
Notably, AI-assisted tools should be viewed akin to spell-check, which signals to the endoscopist when to pay attention and double-check an area — but primarily relies on the expert to do a high-quality exam, said Aasma Shaukat, MD, professor of medicine and director of GI outcomes research at the NYU Grossman School of Medicine, New York City.
“This should be an adjunct or an additional tool, not a replacement tool,” she added. “This doesn’t mean to stop doing astute observation.”
Future tools show promise in terms of tracking additional data related to prep quality, cecal landmarks, polyp size, mucosa exposure, histology prediction, and complete resection. These automated reports could also link to real-time dashboards, hospital or national registries, and reimbursement systems, Dr. Shaukat noted.
“At the end of the day, our interests are aligned,” she said. “Everybody cares about quality, patient satisfaction, and reimbursement, and with that goal in mind, I think some of the tools can be applied to show how we can achieve those principles together.”
Dr. Jacobson, Dr. Kaltenbach, Dr. Keswani, Dr. Pohl, Dr. Samarasena, and Dr. Shaukat reported no relevant financial relationships.
A version of this article appeared on Medscape.com.
High-Dose Vitamin D Linked to Lower Disease Activity in CIS
COPENHAGEN — High-dose vitamin D supplementation was associated with significantly lower disease activity in patients with clinically isolated syndrome (CIS), results of a randomized, controlled trial suggest. In addition, cholecalciferol had a favorable safety profile and was well tolerated.
“These data support high-dose vitamin D supplementation in early MS and make vitamin D the best candidate for add-on therapy evaluation in the therapeutic strategy for multiple sclerosis [MS],” said study author Eric Thouvenot, MD, PhD, University Hospital of Nimes, Neurology Department, Nimes, France.
The study was presented at the 2024 ECTRIMS annual meeting.
Vitamin D Supplementation Versus Placebo
Research shows vitamin D deficiency is a risk factor for MS. However, results of previous research investigating vitamin D supplementation in MS, with different regimens and durations, have been contradictory.
The current double-blind study included 303 adults newly diagnosed with CIS (within 90 days) and a serum 25-hydroxy vitamin D concentration of less than 100 nmol/L at baseline. Participants had a median age of 34 years, and 70% were women.
About one third of participants had optic neuritis, two thirds had oligoclonal bands from cerebrospinal fluid analysis, and the median Expanded Disability Status Scale (EDSS) score was 1.0. Of the total, 89% fulfilled 2017 McDonald criteria for the diagnosis of relapsing-remitting MS (RRMS).
Participants were randomly assigned to receive high-dose (100,000 international units) oral cholecalciferol or placebo every 2 weeks for 24 months. Participants had a clinical visit at 3, 6, 12, 18, and 24 months, and brain and spinal cord MRI with and without gadolinium at 3, 12, and 24 months.
The primary outcome was occurrence of disease activity — relapse, new or enlarging T2 lesions, and presence of contrast-enhancing lesions.
Significant Difference
During follow-up, 60.3% of the vitamin D group showed evidence of disease activity versus 74.1% of the placebo group (hazard ratio [HR], 0.66; 95% CI, 0.50-0.87; P = .004). In addition, the median time to evidence of disease activity was 432 days in the vitamin D group versus 224 days in the placebo group (P = .003).
“As you can see, the difference is really, really significant,” said Dr. Thouvenot, referring to a Kaplan-Meier curve. He said he was somewhat surprised by the “very rapid” effect of vitamin D.
He noted that the 34% reduction in relative risk for disease activity is “similar to that of some published platform therapies for CIS patients.”
An analysis of the 247 patients who met 2017 McDonald criteria for RRMS at baseline showed the same results.
Secondary analyses showed no significant reduction in relapses and no significant differences for annual change in EDSS, quality of life, fatigue, anxiety, or depression.
Additional analyses showed the HR was unchanged after adjusting for known prognostic factors including age, sex, number of lesions (< 9 vs ≥ 9), EDSS score at baseline, and delay between CIS and treatment onset.
Results showed vitamin D3 supplementation was safe and well tolerated. Dr. Thouvenot noted that 95% of participants completed the trial, and none of the 33 severe adverse events in 30 patients suggested hypercalcemia or were related to the study drug.
These encouraging new data support further studies of high-dose vitamin D supplementation as an add-on therapy in early MS, said Dr. Thouvenot. He noted that animal models suggest vitamin D added to interferon beta has a synergistic effect on the immune system.
‘Fabulous’ Research
During a question-and-answer session, delegates praised the study, with some describing it as “fantastic” or “fabulous.”
Addressing a query about why this study succeeded in showing the benefits of vitamin D while numerous previous studies did not, Dr. Thouvenot said it may be due to the longer duration or a design that was better powered to show differences.
Asked if researchers examined vitamin D blood levels during the study, Dr. Thouvenot said these measures are “ongoing.”
Responding to a question of whether high-dose vitamin D could be a lifelong treatment, he referred again to the “excellent” safety of the intervention. Not only is it well tolerated, but vitamin D benefits bones and the risk for hypercalcemia is low except perhaps for patients with tuberculosis or sarcoidosis, he said.
“When you exclude those patients, the safety is huge, so I don’t know why we should stop it once it’s started.”
This study was funded in part by the French Ministry of Health. Dr. Thouvenot reported no relevant disclosures.
A version of this article first appeared on Medscape.com.
FROM ECTRIMS 2024
Harnessing Doxycycline for STI Prevention: A Vital Role for Primary Care Physicians
Primary care physicians frequently offer postexposure prophylaxis for various infections, including influenza, pertussis, tetanus, hepatitis, and Lyme disease, among others. However, the scope of postexposure prophylaxis in primary care is expanding, presenting an opportunity to further integrate it into patient care. As primary care providers, we have the unique advantage of being involved in both preventive care and immediate response, particularly in urgent care or triage scenarios. This dual role is crucial, as timely administration of postexposure prophylaxis can prevent infections from taking hold, especially following high-risk exposures.
Recently, the use of doxycycline as a form of postexposure prophylaxis for sexually transmitted infections (STIs) has gained attention. Traditionally, doxycycline has been used as preexposure or postexposure prophylaxis for conditions like malaria and Lyme disease but has not been widely employed for STI prevention until now. Doxycycline is a relatively common medication, generally safe with side effects that typically resolve upon discontinuation. Several open-label studies have shown that taking 200 mg of doxycycline within 72 hours of condomless sex significantly reduces the incidence of chlamydia, gonorrhea, and syphilis among gay, bisexual, and other men who have sex with men, as well as transgender women who have previously had a bacterial STI. However, these benefits have not been consistently observed among cisgender women and heterosexual men.
Given these findings, the Centers for Disease Control and Prevention now recommends that clinicians discuss the risks and benefits of doxycycline PEP (Doxy PEP) with gay, bisexual, and other men who have sex with men, as well as transgender women who have had a bacterial STI in the past 12 months. This discussion should be part of a shared decision-making process, advising the use of 200 mg of doxycycline within 72 hours of oral, vaginal, or anal sex, with the recommendation not to exceed 200 mg every 24 hours and to reassess the need for continued use every 3-6 months. Doxy PEP can be safely prescribed with preexposure prophylaxis for HIV (PrEP). Patients who receive PrEP may often be eligible for Doxy PEP, though the groups are not always the same.
The shared decision-making process is essential when considering Doxy PEP. While cost-effective and proven to reduce the risk of gonorrhea, chlamydia, and syphilis, its benefits vary among different populations. Moreover, some patients may experience side effects such as photosensitivity and gastrointestinal discomfort. Since the effectiveness of prophylaxis is closely tied to the timing of exposure and the patient’s current risk factors, it is important to regularly evaluate whether Doxy PEP remains beneficial. Because a clear benefit has not yet been shown for heterosexual men and cisgender women, further study is needed to clarify its role in these populations.
Integrating Doxy PEP into a primary care practice can be done efficiently. A standing order protocol could be established for telehealth visits or nurse triage, allowing timely administration when patients report an exposure within 72 hours. It could also be incorporated into electronic medical records as part of a smart set for easy access to orders and as standard educational material in after-visit instructions. As this option is new, it is also important to discuss it with patients before they may need it so that they are aware should the need arise. While concerns about antibiotic resistance are valid, studies have not yet shown significant resistance issues related to Doxy PEP use, though ongoing monitoring is necessary.
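To show how the timing rules above might translate into a standing-order or nurse-triage workflow, here is a hypothetical helper that checks the 72-hour exposure window and the 200-mg-per-24-hours ceiling. The function and field names are assumptions of this sketch; it is illustrative logic, not clinical software.

```python
# Hypothetical Doxy PEP triage check encoding the timing rules described above:
# one 200-mg dose within 72 hours of exposure, no more than 200 mg per 24 hours.

from datetime import datetime, timedelta
from typing import Optional, Tuple

def doxy_pep_timing_ok(exposure: datetime, now: datetime,
                       last_dose: Optional[datetime] = None) -> Tuple[bool, str]:
    """Return (eligible, reason) based solely on the timing rules."""
    if now - exposure > timedelta(hours=72):
        return False, "More than 72 hours since exposure; the window has closed."
    if last_dose is not None and now - last_dose < timedelta(hours=24):
        return False, "A 200-mg dose was already taken within the past 24 hours."
    return True, "Within the 72-hour window; a single 200-mg dose may be offered."

ok, reason = doxy_pep_timing_ok(
    exposure=datetime(2024, 9, 1, 22, 0),
    now=datetime(2024, 9, 3, 9, 0),   # about 35 hours after exposure
)
print(ok, reason)
```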
You might wonder why primary care should prioritize this intervention. As the first point of contact, primary care providers are well-positioned to identify the need for prophylaxis, particularly since its effectiveness diminishes over time. Furthermore, the established, trusting relationships that primary care physicians often have with their patients create a nonjudgmental environment that encourages disclosure of potential exposures. This trust, combined with easier access to care, can make a significant difference in the timely provision of postexposure prophylaxis. By offering comprehensive, holistic care, including prophylaxis, primary care physicians can prevent infections and address conditions before they lead to serious complications. Therefore, family medicine physicians should consider incorporating Doxy PEP into their practices as a standard of care.
Dr. Wheat is vice chair of Diversity, Equity, and Inclusion, Department of Family and Community Medicine, and associate professor, Family and Community Medicine, at Northwestern University’s Feinberg School of Medicine, Chicago. She has no relevant financial disclosures.
References
Bachmann LH et al. CDC Clinical Guidelines on the Use of Doxycycline Postexposure Prophylaxis for Bacterial Sexually Transmitted Infection Prevention, United States, 2024. MMWR Recomm Rep 2024;73(No. RR-2):1-8.
Traeger MW et al. Potential Impact of Doxycycline Postexposure Prophylaxis Prescribing Strategies on Incidence of Bacterial Sexually Transmitted Infections. Clin Infect Dis. 2023 Aug 18. doi: 10.1093/cid/ciad488.
Controlling Six Risk Factors Can Combat CKD in Obesity
TOPLINE:
Optimal management of blood pressure, A1c levels, low-density lipoprotein cholesterol (LDL-C), albuminuria, smoking, and physical activity may reduce the excess risk for chronic kidney disease (CKD) typically linked to obesity. The protective effect is more pronounced in men, in those with lower healthy diet scores, and in users of diabetes medication.
METHODOLOGY:
- Obesity is a significant risk factor for CKD, but it is unknown if managing multiple other obesity-related CKD risk factors can mitigate the excess CKD risk.
- Researchers assessed CKD risk factor control in 97,538 participants with obesity from the UK Biobank and compared them with an equal number of age- and sex-matched control participants with normal body weight and no CKD at baseline.
- Participants with obesity were assessed for six modifiable risk factors: blood pressure, A1c levels, LDL-C, albuminuria, smoking, and physical activity.
- Overall, 2487 participants with obesity had at most two risk factors under combined control, while 12,720 had three, 32,388 had four, 36,988 had five, and 15,381 had all six under control; the group with two or fewer controlled risk factors served as the reference.
- The primary outcome was incident CKD, assessed according to the degree of combined risk factor control. CKD risk at each level of risk factor control in participants with obesity was also compared with CKD incidence in matched normal-weight participants.
TAKEAWAY:
- During a median follow-up period of 10.8 years, 3954 cases of incident CKD were reported in participants with obesity and 1498 cases in matched persons of normal body mass index (BMI).
- In a stepwise pattern, optimal control of each additional risk factor was associated with an 11% reduction in the incidence of CKD events (adjusted hazard ratio [aHR], 0.89; 95% CI, 0.86-0.91), rising to a 49% reduction in CKD incidence (aHR, 0.51; 95% CI, 0.43-0.61) with combined control of all six risk factors in participants with obesity (see the arithmetic sketch after this list).
- The protective effect of combined control of risk factors was more pronounced in men vs women, in those with lower vs higher healthy diet scores, and in users vs nonusers of diabetes medication.
- A similar stepwise pattern emerged between the number of risk factors controlled and CKD risk in participants with obesity compared with matched individuals of normal BMI, with the excess CKD risk eliminated in participants with obesity with six risk factors under control.
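To make the stepwise estimate concrete, the sketch below compounds the reported per-factor hazard ratio across one to six controlled factors. Treating the per-factor effect as multiplicative on the hazard scale is an assumption of this illustration rather than a claim of the study (whose reference group was participants with at most two controlled factors), but six such steps land close to the reported aHR of 0.51.

```python
# Back-of-the-envelope check: compound the reported per-factor aHR of 0.89
# (an 11% reduction per additional controlled risk factor), assuming the
# effect is multiplicative on the hazard scale -- an assumption of this
# sketch, not of the study itself.

PER_FACTOR_AHR = 0.89

for k in range(1, 7):
    combined = PER_FACTOR_AHR ** k
    print(f"{k} controlled factor(s): aHR ~ {combined:.2f} "
          f"({1 - combined:.0%} reduction)")

# Six multiplicative steps give aHR ~ 0.50, in the neighborhood of the
# reported aHR of 0.51 (a 49% reduction) for control of all six factors.
```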
IN PRACTICE:
“Comprehensive control of risk factors might effectively neutralize the excessive CKD risk associated with obesity, emphasizing the potential of a joint management approach in the prevention of CKD in this population,” the authors wrote.
SOURCE:
The study was led by Rui Tang, MS, Department of Epidemiology, School of Public Health and Tropical Medicine, Tulane University, New Orleans, Louisiana. It was published online in Diabetes, Obesity and Metabolism.
LIMITATIONS:
The evaluated risk factors for CKD were selected somewhat arbitrarily and may not represent the ideal set. The study did not consider the time-varying effect of joint risk factor control owing to the lack of some variables, such as A1c over time. The generalizability of the findings is limited because over 90% of the UK Biobank cohort is composed of White participants, and cohort members tend to have healthier behaviors than the overall UK population.
DISCLOSURES:
The study was supported by grants from the US National Heart, Lung, and Blood Institute and the National Institute of Diabetes and Digestive and Kidney Diseases. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Starting Mammography at Age 40 May Backfire Due to False Positives
Earlier this year, I wrote a Medscape commentary to explain my disagreement with the US Preventive Services Task Force (USPSTF)’s updated recommendation that all women at average risk for breast cancer start screening mammography at age 40. The bottom line is that when the evidence doesn’t change, the guidelines shouldn’t change. Since then, other screening experts have criticized the USPSTF guideline on similar grounds, and a national survey reported that nearly 4 out of 10 women in their 40s preferred to delay breast cancer screening after viewing a decision aid and a personalized breast cancer risk estimate.
The decision analysis performed for the USPSTF guideline estimated that compared with having mammography beginning at age 50, 1000 women who begin at age 40 experience 519 more false-positive results and 62 more benign breast biopsies. Another study suggested that anxiety and other psychosocial harms resulting from a false-positive test are similar between patients who require a biopsy vs additional imaging only. Of greater concern, women who have false-positive results are less likely to return for their next scheduled screening exam.
A recent analysis of 2005-2017 data from the US Breast Cancer Surveillance Consortium found that about 1 in 10 mammograms had a false-positive result. Sixty percent of these patients underwent immediate additional imaging, 27% were recalled for diagnostic imaging within the next few days to weeks, and 13% were advised to have a biopsy. While patients who had additional imaging at the same visit were only 1.9% less likely to return for screening mammography within 30 months compared with those with normal mammograms, women who were recalled for short-interval follow-up or recommended for biopsy were 15.9% and 10% less likely to return, respectively. For unclear reasons, women who identified as Asian or Hispanic had even lower rates of return screening after false-positive results.
These differences matter because women in their 40s, with the lowest incidence of breast cancer among those undergoing screening, have a lot of false positives. A patient who follows the USPSTF recommendation and starts screening at age 40 has a 42% chance of having at least one false positive with every-other-year screening, or a 61% chance with annual screening, by the time she turns 50. If some of these patients are so turned off by false positives that they don’t return for regular mammography in their 50s and 60s, when screening is the most likely to catch clinically significant cancers at treatable stages, then moving up the starting age may backfire and cause net harm.
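The cumulative figures can be roughly reproduced from the per-exam false-positive rate cited above. The sketch below assumes a 1-in-10 false-positive rate per mammogram and independence between exams, both simplifications, and it lands near the modeled 42% and 61% chances.

```python
# Rough reproduction of the cumulative false-positive estimates quoted above,
# assuming a 10% false-positive rate per mammogram (the approximate BCSC
# figure cited earlier) and independence between exams -- both simplifying
# assumptions of this sketch.

P_FALSE_POSITIVE = 0.10

def chance_of_at_least_one(n_exams: int, p: float = P_FALSE_POSITIVE) -> float:
    """Probability of at least one false positive across n independent exams."""
    return 1 - (1 - p) ** n_exams

print(f"Biennial screening, ages 40-49 (~5 exams): {chance_of_at_least_one(5):.0%}")
print(f"Annual screening, ages 40-49 (~10 exams): {chance_of_at_least_one(10):.0%}")
# Prints roughly 41% and 65%, in the same ballpark as the modeled 42% and 61%.
```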
The recently implemented FDA rule requiring mammography reports to include breast density could compound this problem. Because younger women are more likely to have dense breasts, more of them will probably decide to have supplemental imaging to screen for cancer. I previously pointed out that we don’t know whether supplemental imaging with breast ultrasonography or MRI reduces cancer deaths, but we do know that it increases false-positive results.
I have personally cared for several patients who abandoned screening mammography for long stretches, or permanently, after having endured one or more benign biopsies prompted by a false-positive result. I vividly recall one woman in her 60s who was very reluctant to have screening tests in general, and mammography in particular, for that reason. After she had been my patient for a few years, I finally persuaded her to resume screening. We were both surprised when her first mammogram in more than a decade revealed an early-stage breast cancer. Fortunately, the tumor was successfully treated, but for her, an earlier false-positive result nearly ended up having critical health consequences.
Dr. Lin is associate director, Family Medicine Residency Program, Lancaster General Hospital, Lancaster, Pennsylvania. He blogs at Common Sense Family Doctor. He has no relevant financial relationships.
A version of this article appeared on Medscape.com.
Should There Be a Mandatory Retirement Age for Physicians?
This transcript has been edited for clarity.
I’d like to pose a question: When should doctors retire? When, as practicing physicians or surgeons, do we become too old to deliver competent service?
You will be amazed to hear, those of you who have listened to my videos before — and although it is a matter of public knowledge — that I’m 68. I know it’s impossible to imagine, due to this youthful appearance, visage, and so on, but I am. I’ve been a cancer doctor for 40 years; therefore, I need to think a little about retirement.
There are two elements of this for me. I’m a university professor, and in Oxford we did vote, as a democracy of scholars, to have a mandatory retirement age around 68. This is so that we can bring new blood forward so that we can create the space to promote new professors, to bring youngsters in to make new ideas, and to get rid of us fusty old lot.
The other argument would be, of course, that we are wise, we’re experienced, we are world-weary, and we’re successful — otherwise, we wouldn’t have lasted as academics as long. Nevertheless, we voted to do that.
It’s possible to have a discussion with the university to extend this, and for those of us who are clinical academics, I have an honorary appointment as a consultant cancer physician in the hospital and my university professorial appointment, too.
I can extend it probably until I’m about 70. It feels like a nice, round number at which to retire — somewhat arbitrarily, one would admit. But does that feel right?
In the United States, more than 25% of the physician workforce is over the age of 65. There are many studies showing that there is a 20% cognitive decline for most individuals between the ages of 45 and 65.
Are we, as an older workforce, as capable as we once were? Clearly, the answer is highly individual. It depends on each of our own health status, where we started from, and so on, but are there any general rules that we can apply? I think these are starting to creep in around the sense of revalidation.
In the United Kingdom, we have a General Medical Council (GMC). I need to have a license to practice from the GMC and a sense of fitness to practice. I have annual appraisals within the hospital system, in which I explore delivery of care, how I’m doing as a mentor, am I reaching the milestones I’ve set in terms of academic achievements, and so on.
This is a peer-to-peer process. We have senior physicians — people like myself — who act as appraisers to support our colleagues and to maintain that sense of fitness to practice. Every 5 years, I’m revalidated by the GMC. They take account of the annual appraisals and a report made by the senior physician within my hospital network who’s a so-called designated person.
These two elements come together with patient feedback, with 360-degree feedback from colleagues, and so on. This is quite a firmly regulated system that I think works. Our mandatory retirement age of 65 has gone; it was phased out by the government. In fact, our NHS is making an effort to retain older doctors in the workforce.
They see the benefits of mentorship, experience, leadership, and networks. At a time when many NHS staff are actively seeking to retire at 65, the NHS is trying to retain and pull back those of us who have been around that wee bit longer and who still feel committed to doing it.
I’d be really interested to see what you think. There’s variation from country to country. I know that, in Australia, they’re talking about annual appraisals of doctors over the age of 70. I’d be very interested to hear what you think is likely to happen in the United States.
I think our system works pretty well, as long as you’re within the NHS and hospital system. If you wanted to still practice, but practice privately, you would still have to find somebody who’d be prepared to conduct appraisals and so on outside of the NHS. It’s an interesting area.
For myself, I still feel competent. Patients seem to like me. That's an objective assessment from the 360-degree appraisal, in which patients reflected very positively indeed on my approach to the delivery of care, as did colleagues. I'm still publishing, I go to meetings, I chair things, bits and bobs. I'd say I'm a wee bit unusual in terms of still having a strong academic profile.
It’s an interesting question. Richard Doll, one of the world’s great epidemiologists who, of course, was the dominant discoverer of the link between smoking and lung cancer, was attending seminars, sitting in the front row, and coming into university 3 days a week at age 90, continuing to be contributory with his extraordinarily sharp intellect and vast, vast experience.
When I think of experience: all young cancer doctors are now immunologists, whereas when I was a young doctor, I was a clinical pharmacologist. There are many lessons and tricks that I learned which I do need to pass on to the younger generation of today. What do you think? Should there be a mandatory retirement age? How do we best measure, assess, and revalidate older physicians and surgeons? How can those of us who choose to continue best keep contributing? For the time being, as always, thanks for listening.
Dr. Kerr is professor, Nuffield Department of Clinical Laboratory Science, University of Oxford, and professor of cancer medicine, Oxford Cancer Centre, Oxford, United Kingdom. He has disclosed ties with Celleron Therapeutics and Oxford Cancer Biomarkers (board of directors); Afrox (charity trustee); GlaxoSmithKline and Bayer HealthCare Pharmaceuticals (consultant); and Genomic Health, Merck Serono, and Roche.
A version of this article appeared on Medscape.com.
Fecal Immunochemical Test Performance for CRC Screening Varies Widely
The performance of fecal immunochemical tests for colorectal cancer screening varies widely from test to test, new research suggests.
In a comparative performance analysis of five commonly used FITs for colorectal cancer (CRC) screening, researchers found statistically significant differences in positivity rates, sensitivity, and specificity, as well as important differences in rates of unusable tests.
“Our findings have practical importance for FIT-based screening programs as these differences affect the need for repeated FIT, the yield of ACN [advanced colorectal neoplasia] detection, and the number of diagnostic colonoscopies that would be required to follow up on abnormal findings,” wrote the researchers, led by Barcey T. Levy, MD, PhD, of the University of Iowa, Iowa City.
The study was published online in Annals of Internal Medicine.
Wide Variation Found
Despite widespread use of FITs for CRC screening, there are limited data to help guide test selection. Understanding the comparative performance of different FITs is “crucial” for a successful FIT-based screening program, the researchers wrote.
Dr. Levy and colleagues directly compared the performance of five commercially available FITs — including four qualitative tests (Hemoccult ICT, Hemosure iFOB, OC-Light S FIT, and QuickVue iFOB) and one quantitative test (OC-Auto FIT) — using colonoscopy as the reference standard.
Participants included a diverse group of 3761 adults (mean age, 62 years; 63% women). Each participant was given all five tests and completed them using the same stool sample, then returned the tests by first-class mail to a central location, where a trained professional analyzed the FITs on the day of receipt.
The primary outcome was test performance (sensitivity and specificity) for ACN, defined as advanced polyps or CRC.
A total of 320 participants (8.5%) were found to have ACN based on colonoscopy results, including nine with CRC (0.2%) — rates that are similar to those found in other studies.
The sensitivity for detecting ACN ranged from 10.1% (Hemoccult ICT) to 36.7% (OC-Light S FIT), and specificity varied from 85.5% (OC-Light S FIT) to 96.6% (Hemoccult ICT).
“Given the variation in FIT cutoffs reported by manufacturers, it is not surprising that tests with lower cutoffs (such as OC-Light S FIT) had higher sensitivity than tests with higher cutoffs (such as Hemoccult ICT),” Dr. Levy and colleagues wrote.
Test positivity rates varied fourfold across FITs, from 3.9% for Hemoccult ICT to 16.4% for OC-Light S FIT.
The rates of tests deemed unevaluable (due to factors such as indeterminate results or user error) ranged from 0.2% for OC-Auto FIT to 2.5% for QuickVue iFOB.
The highest positive predictive value (PPV) was observed with OC-Auto FIT (28.9%) and the lowest with Hemosure iFOB (18.2%). The negative predictive value was similar across tests, ranging from 92.2% to 93.3%, indicating consistent performance in ruling out disease.
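As a rough illustration of how these predictive values relate to sensitivity, specificity, and the 8.5% ACN prevalence, the Bayes' rule sketch below (my own; the study derived its PPV and NPV figures directly from observed counts, not from this formula) plugs in the OC-Light S FIT values quoted above.

```python
# Rough Bayes' rule sketch relating PPV/NPV to sensitivity, specificity,
# and disease prevalence; illustrative only -- the published predictive
# values were computed from observed counts rather than this formula.

def ppv(sens: float, spec: float, prev: float) -> float:
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

def npv(sens: float, spec: float, prev: float) -> float:
    true_neg = spec * (1 - prev)
    false_neg = (1 - sens) * prev
    return true_neg / (true_neg + false_neg)

PREV = 0.085  # ACN prevalence in the study (320 of 3761 participants)
# OC-Light S FIT figures quoted above: sensitivity 36.7%, specificity 85.5%
print(f"PPV: {ppv(0.367, 0.855, PREV):.1%}")  # ~19%
print(f"NPV: {npv(0.367, 0.855, PREV):.1%}")  # ~94%
```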
The study also identified significant differences in test sensitivity based on factors such as the location of neoplasia (higher sensitivity for distal lesions) and patient characteristics (higher sensitivity in people with higher body mass index and lower income).
Dr. Levy and colleagues said their findings have implications both in terms of clinical benefits and cost-effectiveness of CRC screening using FITs.
“Tests with lower sensitivity will miss more patients with CRC and advanced polyps, and tests with higher sensitivity and lower PPV will require more colonoscopies to detect patients with actionable findings,” they wrote.
‘Jaw-Dropping’ Results
The sensitivity results are “jaw-dropping,” Robert Smith, PhD, senior vice-president for cancer screening at the American Cancer Society, said in an interview. “A patient should have at least a 50/50 chance of having their colorectal cancer detected with a stool test at the time of testing.”
“What these numbers show is that the level that the manufacturers believe their test is performing is not reproduced,” Dr. Smith added.
This study adds to “concerns that have been raised about the inherent limitations and the performance of these tests that have been cleared for use and that are supposed to be lifesaving,” he said.
Clearance by the US Food and Drug Administration should mean that there’s essentially “no risk to using the test in terms of the test itself being harmful,” Dr. Smith said. But that’s not the case with FITs “because it’s harmful if you have cancer and your test doesn’t find it.”
By way of study limitations, Dr. Levy and colleagues said it’s important to note that they did not evaluate the “programmatic” sensitivity of repeating FIT testing every 1-2 years, as is generally recommended in screening guidelines. Therefore, the sensitivity of a single FIT may be lower than that of a repeated FIT. Also, variability in the FIT collection process by participants might have affected the results.
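The study did not measure programmatic sensitivity, but a back-of-envelope sketch shows why repeated rounds can push effective sensitivity well above a single test's. The calculation below is my own simplification: it assumes, unrealistically, that each round is an independent chance to detect a lesion present throughout, whereas in practice rounds are correlated and lesions progress.

```python
# Back-of-envelope "programmatic" sensitivity over repeated FIT rounds,
# assuming each round is an independent detection opportunity for a
# lesion present the whole time (a simplification; real rounds are
# correlated, so actual programmatic sensitivity will differ).

def programmatic_sensitivity(per_round_sens: float, rounds: int) -> float:
    """P(detected in >= 1 of n rounds) = 1 - (1 - s)^n."""
    return 1.0 - (1.0 - per_round_sens) ** rounds

for s in (0.101, 0.367):  # single-round ACN sensitivities reported above
    print(f"single round {s:.1%} -> 3 rounds: {programmatic_sensitivity(s, 3):.1%}")
# single round 10.1% -> 3 rounds: 27.3%
# single round 36.7% -> 3 rounds: 74.6%
```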
The study had no commercial funding. Disclosures for authors are available with the original article. Dr. Smith had no relevant disclosures.
A version of this article appeared on Medscape.com.
Hidden in Plain Sight: The Growing Epidemic of Ultraprocessed Food Addiction
Yet, even as evidence of their harms has mounted, these food items have become increasingly prominent in diets globally. Now, recent studies are unlocking why cutting back on ultraprocessed foods can be so challenging. In their ability to fuel intense cravings, loss of control, and even withdrawal symptoms, ultraprocessed foods appear as capable of triggering addiction as traditional culprits like tobacco and alcohol.
This has driven efforts to better understand the addictive nature of these foods and identify strategies for combating it.
The Key Role of the Food Industry
Some foods are more likely to trigger addictions than others. For instance, in our studies, participants frequently mention chocolate, pizza, French fries, potato chips, and soda as some of the most addictive foods. What these foods all share is an ability to deliver high doses of refined carbohydrates, fat, or salt at levels exceeding those found in natural foods (eg, fruits, vegetables, beans).
Furthermore, ultraprocessed foods are industrially mass-produced in a process that relies on the heavy use of flavor enhancers and additives, as well as preservatives and packaging that make them shelf-stable. This has flooded our food supply with cheap, accessible, hyperrewarding foods that our brains are not well equipped to resist.
To add to these already substantial effects, the food industry often employs strategies reminiscent of Big Tobacco. They engineer foods to hit our “bliss points,” maximizing craving and fostering brand loyalty from a young age. This product engineering, coupled with aggressive marketing, makes these foods both attractive and seemingly ubiquitous.
How Many People Are Affected?
Addiction to ultraprocessed food is more common than you might think. According to the Yale Food Addiction Scale — a tool that uses the same criteria for diagnosing substance use disorders to assess ultraprocessed food addiction (UPFA) — about 14% of adults and 12% of children show clinically significant signs of addiction to such foods. This is quite similar to addiction rates among adults for legal substances like alcohol and tobacco.
Research has shown that behaviors and brain mechanisms contributing to addictive disorders, such as cravings and impulsivity, also apply to UPFA.
Many more people than those who meet full criteria for UPFA are influenced by the addictive properties of these foods. Picture a teenager craving a sugary drink after school, a child needing the morning cereal fix, or adults reaching for candy and fast food; these scenarios illustrate how addictive ultraprocessed foods permeate our daily lives.
From a public health standpoint, this comes at a significant cost. Even experiencing one or two symptoms of UPFA, such as intense cravings or a feeling of loss of control over intake, can lead to consuming excess calories, sugar, fat, and sodium in a way that puts health at risk.
Clinical Implications
Numerous studies have found that individuals who exhibit UPFA have more severe mental and physical health challenges. For example, UPFA is associated with higher rates of diet-related diseases (like type 2 diabetes), greater overall mental health issues, and generally poorer outcomes in weight loss treatments.
Despite the growing understanding of UPFA’s relevance in clinical settings, research is still limited on how to best treat, manage, or prevent it. Most of the existing work has focused on investigating whether UPFA is indeed a real condition, with efforts to create clinical guidelines only just beginning.
Of note, UPFA isn’t officially recognized as a diagnosis — yet. If it were, it could spark much more research into how to handle it clinically.
There is some debate about whether we really need this new diagnosis, given that eating disorders are already recognized. However, the statistics tell a different story: Around 14% of people might have UPFA compared with about 1% for binge-type eating disorders. This suggests that many individuals with problematic eating habits are currently flying under the radar with our existing diagnostic categories.
What’s even more concerning is that these individuals often suffer significant problems and exhibit distinct brain differences, even if they do not neatly fit into an existing eating disorder diagnosis. Officially recognizing UPFA could open up new avenues for support and lead to better treatments aimed at reducing compulsive eating patterns.
Treatment Options
Treatment options for UPFA are still being explored. Initial evidence suggests that medications used for treating substance addiction, such as naltrexone and bupropion, might help with highly processed food addiction as well. Newer drugs, like glucagon-like peptide-1 receptor agonists, which appear to curb food cravings and manage addictive behaviors, also look promising.
Psychosocial approaches can also be used to address UPFA. Strategies include:
- Helping individuals become more aware of their triggers for addictive patterns of intake. This often involves identifying certain types of food (eg, potato chips, candy), specific places or times of day (eg, sitting on the couch at night while watching TV), and particular emotional states (eg, anger, loneliness, boredom, sadness). Increasing awareness of personal triggers can help people minimize their exposure to these and develop coping strategies when they do arise.
- Many people use ultraprocessed foods to cope with challenging emotions. Helping individuals develop healthier strategies to regulate their emotions can be key. This may include seeking out social support, journaling, going for a walk, or practicing mindfulness.
- UPFA can be associated with erratic and inconsistent eating patterns. Stabilizing eating habits by consuming regular meals composed of more minimally processed foods (eg, vegetables, fruits, high-quality protein, beans) can help heal the body and reduce vulnerability to ultraprocessed food triggers.
- Many people with UPFA have other existing mental health conditions, including mood disorders, anxiety, substance use disorders, or trauma-related disorders. Addressing these co-occurring mental health conditions can help reduce reliance on ultraprocessed foods.
Public-policy interventions may also help safeguard vulnerable populations from developing UPFA. For instance, support exists for policies to protect children from cigarette marketing and to put clear addiction warning labels on cigarette packages. A similar approach could be applied to reduce the harms associated with ultraprocessed foods, particularly for children.
Combating this growing problem requires treating ultraprocessed foods like other addictive substances. By identifying the threat posed by these common food items, we can not only help patients with UPFA, but also potentially stave off the development of several diet-related conditions.
Dr. Gearhardt, professor of psychology, University of Michigan, Ann Arbor, has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
become increasingly prominent in diets globally.
Yet, even as this evidence mounted, these food items haveNow, recent studies are unlocking why cutting back on ultraprocessed foods can be so challenging. In their ability to fuel intense cravings, loss of control, and even withdrawal symptoms, ultraprocessed foods appear as capable of triggering addiction as traditional culprits like tobacco and alcohol.
This has driven efforts to better understand the addictive nature of these foods and identify strategies for combating it.
The Key Role of the Food Industry
Some foods are more likely to trigger addictions than others. For instance, in our studies, participants frequently mention chocolate, pizza, French fries, potato chips, and soda as some of the most addictive foods. What these foods all share is an ability to deliver high doses of refined carbohydrates, fat, or salt at levels exceeding those found in natural foods (eg, fruits, vegetables, beans).
Furthermore, ultraprocessed foods are industrially mass-produced in a process that relies on the heavy use of flavor enhancers and additives, as well as preservatives and packaging that make them shelf-stable. This has flooded our food supply with cheap, accessible, hyperrewarding foods that our brains are not well equipped to resist.
To add to these already substantial effects, the food industry often employs strategies reminiscent of Big Tobacco. They engineer foods to hit our “bliss points,” maximizing craving and fostering brand loyalty from a young age. This product engineering, coupled with aggressive marketing, makes these foods both attractive and seemingly ubiquitous.
How Many People Are Affected?
Addiction to ultraprocessed food is more common than you might think. According to the Yale Food Addiction Scale — a tool that uses the same criteria for diagnosing substance use disorders to assess ultraprocessed food addiction (UPFA) — about 14% of adults and 12% of children show clinically significant signs of addiction to such foods. This is quite similar to addiction rates among adults for legal substances like alcohol and tobacco.
Research has shown that behaviors and brain mechanisms contributing to addictive disorders, such as cravings and impulsivity, also apply to UPFA.
Many more people outside of those who meet the criteria for UPFA are influenced by their addictive properties. Picture a teenager craving a sugary drink after school, a child needing the morning cereal fix, or adults reaching for candy and fast food; these scenarios illustrate how addictive ultraprocessed foods permeate our daily lives.
From a public health standpoint, this comes at a significant cost. Even experiencing one or two symptoms of UPFA, such as intense cravings or a feeling of loss of control over intake, can lead to consuming too many calories, sugar, fat, and sodium in a way that puts health at risk.
Clinical Implications
Numerous studies have found that individuals who exhibit UPFA have more severe mental and physical health challenges. For example, UPFA is associated with higher rates of diet-related diseases (like type 2 diabetes), greater overall mental health issues, and generally poorer outcomes in weight loss treatments.
Despite the growing understanding of UPFA’s relevance in clinical settings, research is still limited on how to best treat, manage, or prevent it. Most of the existing work has focused on investigating whether UPFA is indeed a real condition, with efforts to create clinical guidelines only just beginning.
Of note, UPFA isn’t officially recognized as a diagnosis — yet. If it were, it could spark much more research into how to handle it clinically.
There is some debate about whether we really need this new diagnosis, given that eating disorders are already recognized. However, the statistics tell a different story: Around 14% of people might have UPFA compared with about 1% for binge-type eating disorders. This suggests that many individuals with problematic eating habits are currently flying under the radar with our existing diagnostic categories.
What’s even more concerning is that these individuals often suffer significant problems and exhibit distinct brain differences, even if they do not neatly fit into an existing eating disorder diagnosis. Officially recognizing UPFA could open up new avenues for support and lead to better treatments aimed at reducing compulsive eating patterns.
Treatment Options
Treatment options for UPFA are still being explored. Initial evidence suggests that medications used for treating substance addiction, such as naltrexone and bupropion, might help with highly processed food addiction as well. Newer drugs, like glucagon-like peptide-1 receptor agonists, which appear to curb food cravings and manage addictive behaviors, also look promising.
Psychosocial approaches can also be used to address UPFA. Strategies include:
- Helping individuals become more aware of their triggers for addictive patterns of intake. This often involves identifying certain types of food (eg, potato chips, candy), specific places or times of day (eg, sitting on the couch at night while watching TV), and particular emotional states (eg, anger, loneliness, boredom, sadness). Increasing awareness of personal triggers can help people minimize their exposure to these and develop coping strategies when they do arise.
- Many people use ultraprocessed foods to cope with challenging emotions. Helping individuals develop healthier strategies to regulate their emotions can be key. This may include seeking out social support, journaling, going for a walk, or practicing mindfulness.
- UPFA can be associated with erratic and inconsistent eating patterns. Stabilizing eating habits by consuming regular meals composed of more minimally processed foods (eg, vegetables, fruits, high-quality protein, beans) can help heal the body and reduce vulnerability to ultraprocessed food triggers.
- Many people with UPFA have other existing mental health conditions, including mood disorders, anxiety, substance use disorders, or trauma-related disorders. Addressing these co-occurring mental health conditions can help reduce reliance on ultraprocessed foods.
Public-policy interventions may also help safeguard vulnerable populations from developing UPFA. For instance, support exists for policies to protect children from cigarette marketing and to put clear addiction warning labels on cigarette packages. A similar approach could be applied to reduce the harms associated with ultraprocessed foods, particularly for children.
Combating this growing problem requires treating ultraprocessed foods like other addictive substances. By identifying the threat posed by these common food items, we can not only help patients with UPFA, but also potentially stave off the development of several diet-related conditions.
Dr. Gearhardt, professor of psychology, University of Michigan, Ann Arbor, has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
become increasingly prominent in diets globally.
Yet, even as this evidence mounted, these food items haveNow, recent studies are unlocking why cutting back on ultraprocessed foods can be so challenging. In their ability to fuel intense cravings, loss of control, and even withdrawal symptoms, ultraprocessed foods appear as capable of triggering addiction as traditional culprits like tobacco and alcohol.
This has driven efforts to better understand the addictive nature of these foods and identify strategies for combating it.
The Key Role of the Food Industry
Some foods are more likely to trigger addictions than others. For instance, in our studies, participants frequently mention chocolate, pizza, French fries, potato chips, and soda as some of the most addictive foods. What these foods all share is an ability to deliver high doses of refined carbohydrates, fat, or salt at levels exceeding those found in natural foods (eg, fruits, vegetables, beans).
Furthermore, ultraprocessed foods are industrially mass-produced in a process that relies on the heavy use of flavor enhancers and additives, as well as preservatives and packaging that make them shelf-stable. This has flooded our food supply with cheap, accessible, hyperrewarding foods that our brains are not well equipped to resist.
To add to these already substantial effects, the food industry often employs strategies reminiscent of Big Tobacco. They engineer foods to hit our “bliss points,” maximizing craving and fostering brand loyalty from a young age. This product engineering, coupled with aggressive marketing, makes these foods both attractive and seemingly ubiquitous.
How Many People Are Affected?
Addiction to ultraprocessed food is more common than you might think. According to the Yale Food Addiction Scale — a tool that uses the same criteria for diagnosing substance use disorders to assess ultraprocessed food addiction (UPFA) — about 14% of adults and 12% of children show clinically significant signs of addiction to such foods. This is quite similar to addiction rates among adults for legal substances like alcohol and tobacco.
Research has shown that behaviors and brain mechanisms contributing to addictive disorders, such as cravings and impulsivity, also apply to UPFA.
Many more people outside of those who meet the criteria for UPFA are influenced by their addictive properties. Picture a teenager craving a sugary drink after school, a child needing the morning cereal fix, or adults reaching for candy and fast food; these scenarios illustrate how addictive ultraprocessed foods permeate our daily lives.
From a public health standpoint, this comes at a significant cost. Even experiencing one or two symptoms of UPFA, such as intense cravings or a feeling of loss of control over intake, can lead to consuming too many calories, sugar, fat, and sodium in a way that puts health at risk.
Clinical Implications
Numerous studies have found that individuals who exhibit UPFA have more severe mental and physical health challenges. For example, UPFA is associated with higher rates of diet-related diseases (like type 2 diabetes), greater overall mental health issues, and generally poorer outcomes in weight loss treatments.
Despite the growing understanding of UPFA’s relevance in clinical settings, research is still limited on how to best treat, manage, or prevent it. Most of the existing work has focused on investigating whether UPFA is indeed a real condition, with efforts to create clinical guidelines only just beginning.
Of note, UPFA isn’t officially recognized as a diagnosis — yet. If it were, it could spark much more research into how to handle it clinically.
There is some debate about whether we really need this new diagnosis, given that eating disorders are already recognized. However, the statistics tell a different story: Around 14% of people might have UPFA compared with about 1% for binge-type eating disorders. This suggests that many individuals with problematic eating habits are currently flying under the radar with our existing diagnostic categories.
What’s even more concerning is that these individuals often suffer significant problems and exhibit distinct brain differences, even if they do not neatly fit into an existing eating disorder diagnosis. Officially recognizing UPFA could open up new avenues for support and lead to better treatments aimed at reducing compulsive eating patterns.
Treatment Options
Treatment options for UPFA are still being explored. Initial evidence suggests that medications used for treating substance addiction, such as naltrexone and bupropion, might help with highly processed food addiction as well. Newer drugs, like glucagon-like peptide-1 receptor agonists, which appear to curb food cravings and manage addictive behaviors, also look promising.
Psychosocial approaches can also be used to address UPFA. Strategies include:
- Helping individuals become more aware of their triggers for addictive patterns of intake. This often involves identifying certain types of food (eg, potato chips, candy), specific places or times of day (eg, sitting on the couch at night while watching TV), and particular emotional states (eg, anger, loneliness, boredom, sadness). Increasing awareness of personal triggers can help people minimize their exposure to these and develop coping strategies when they do arise; a schematic trigger-log sketch follows this list.
- Many people use ultraprocessed foods to cope with challenging emotions. Helping individuals develop healthier strategies to regulate their emotions can be key. This may include seeking out social support, journaling, going for a walk, or practicing mindfulness.
- UPFA can be associated with erratic and inconsistent eating patterns. Stabilizing eating habits by consuming regular meals composed of more minimally processed foods (eg, vegetables, fruits, high-quality protein, beans) can help heal the body and reduce vulnerability to ultraprocessed food triggers.
- Many people with UPFA have other existing mental health conditions, including mood disorders, anxiety, substance use disorders, or trauma-related disorders. Addressing these co-occurring mental health conditions can help reduce reliance on ultraprocessed foods.
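As a purely illustrative aid (not a validated clinical tool), the sketch below shows one way the food, context, and emotion triggers described in the first strategy could be logged and tallied so the most frequent patterns surface; all field names and entries are hypothetical.

```python
# Hypothetical trigger log: tally how often each food, context, and emotion
# appears, so recurring patterns become visible. Entries are illustrative.

from collections import Counter
from dataclasses import dataclass

@dataclass
class TriggerEvent:
    food: str       # eg, "potato chips"
    context: str    # eg, "couch, late evening"
    emotion: str    # eg, "boredom"

def summarize(log: list[TriggerEvent]) -> dict[str, Counter]:
    """Count how often each food, context, and emotion appears in the log."""
    return {
        "food": Counter(e.food for e in log),
        "context": Counter(e.context for e in log),
        "emotion": Counter(e.emotion for e in log),
    }

log = [
    TriggerEvent("potato chips", "couch, late evening", "boredom"),
    TriggerEvent("candy", "desk, afternoon", "stress"),
    TriggerEvent("potato chips", "couch, late evening", "loneliness"),
]
for field, counts in summarize(log).items():
    print(field, counts.most_common(1))  # most frequent trigger per category
```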
Public-policy interventions may also help safeguard vulnerable populations from developing UPFA. For instance, support exists for policies to protect children from cigarette marketing and to put clear addiction warning labels on cigarette packages. A similar approach could be applied to reduce the harms associated with ultraprocessed foods, particularly for children.
Combating this growing problem requires treating ultraprocessed foods like other addictive substances. By identifying the threat posed by these common food items, we can not only help patients with UPFA, but also potentially stave off the development of several diet-related conditions.
Dr. Gearhardt, professor of psychology, University of Michigan, Ann Arbor, has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Bariatric Surgery and Weight Loss Make Brain Say Meh to Sweets
TOPLINE:
After bariatric surgery and roughly 20% weight loss, women with obesity preferred less-concentrated sucrose and showed a reduced brain reward response to a high-concentration sweet taste, suggesting that both the liking of and the neural reward for sweets decline with surgery-induced weight loss.
METHODOLOGY:
- Previous studies have suggested that individuals undergoing bariatric surgery show reduced preference for sweet-tasting food post-surgery, but the mechanisms behind these changes remain unclear.
- This observational cohort study examined the neural processing of sweet taste in the reward regions of the brain before and after bariatric surgery in 24 women with obesity (mean body mass index [BMI], 47) who underwent bariatric surgery and 21 control participants ranging from normal weight to overweight (mean BMI, 23.5).
- Participants (mean age, about 43 years; 75%-81% White) underwent sucrose taste testing and functional MRI (fMRI) to compare the brain's responses to sucrose solutions of 0.10 M and 0.40 M (akin to sugar-sweetened beverages such as Coca-Cola at ~0.32 M and Mountain Dew at ~0.35 M) versus water; a rough conversion of these molarities to grams of sugar appears after this list.
- In the bariatric surgery group, participants underwent fMRI 1-117 days before surgery, and 21 participants who lost about 20% of their weight after the surgery underwent a follow-up fMRI roughly 3-4 months later.
- The researchers analyzed the brain’s reward response using a composite activation of several reward system regions (the ventral tegmental area, ventral striatum, and orbitofrontal cortex) and of sensory regions (the primary somatosensory cortex and primary insula taste cortex).
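To put the molar concentrations above in everyday terms, the short Python sketch below converts them to grams of sucrose per 100 mL using the molar mass of sucrose (~342.3 g/mol); the beverage molarities are the approximations quoted in the methodology.

```python
# Back-of-the-envelope conversion of the sucrose test concentrations.
# grams/L = molarity (mol/L) x molar mass (g/mol); sucrose is ~342.3 g/mol.

SUCROSE_G_PER_MOL = 342.3

def molar_to_g_per_100ml(molarity: float) -> float:
    """Convert a sucrose molarity to grams per 100 mL."""
    return molarity * SUCROSE_G_PER_MOL / 10  # g/L divided by 10

for label, molarity in [("0.10 M test solution", 0.10),
                        ("0.40 M test solution", 0.40),
                        ("Coca-Cola (~0.32 M)", 0.32)]:
    print(f"{label}: ~{molar_to_g_per_100ml(molarity):.1f} g sucrose per 100 mL")

# The 0.40 M solution works out to ~13.7 g/100 mL, ie, somewhat sweeter
# than a typical cola at roughly 10-11 g of sugar per 100 mL.
```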
TAKEAWAY:
- The perceived intensity of sweetness was comparable between the control group and the bariatric surgery group both before and after surgery.
- In the bariatric surgery group, the average preferred sweet concentration decreased from 0.52 M before surgery to 0.29 M after surgery (P = .008).
- The fMRI analysis indicated that, before surgery, women in the bariatric surgery group showed a trend toward a higher reward response to 0.4 M sucrose than control participants.
- The activation of the reward region in response to 0.4 M sucrose (but not 0.1 M) declined in the bariatric surgery group after surgery (P = .042).
IN PRACTICE:
“Our findings suggest that both the brain reward response to and subjective liking of an innately desirable taste decline following bariatric surgery,” the authors wrote.
SOURCE:
This study was led by Jonathan Alessi, Indiana University School of Medicine, Indianapolis, and published online in Obesity.
LIMITATIONS:
The study sample was relatively small, and follow-up was short, with recruitment curtailed by the COVID-19 pandemic. The study did not assess consumption of sugar or sweetened foods, which could have provided further insight into post-surgery changes in dietary behavior. Only women were included, and the findings may not generalize to men.
DISCLOSURES:
This study was funded by the American Diabetes Association, Indiana Clinical and Translational Sciences Institute, and National Institute on Alcohol Abuse and Alcoholism. Three authors reported financial relationships with some pharmaceutical companies outside of this study.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.