Lung Cancer Screening Unveils Hidden Health Risks
Lung cancer screening does more than look for lung cancer: the low-dose CT scans used for screening cover the lower neck down to the upper abdomen, revealing far more anatomy than just the lungs.
In fact, lung cancer screening can provide information on three of the top 10 causes of death worldwide: ischemic heart disease, chronic obstructive pulmonary disease, and, of course, lung cancer.
With lung cancer screening, “we are basically targeting many birds with one low-dose stone,” explained Jelena Spasic, MD, PhD, at the European Lung Cancer Congress (ELCC) 2024.
Dr. Spasic, a medical oncologist at the Institute for Oncology and Radiology of Serbia in Belgrade, was the discussant on a study that gave an indication of just how useful screening can be for detecting other diseases.
The study, dubbed the 4-IN-THE-LUNG-RUN trial (4ITLR), is an ongoing prospective trial in six European countries that is using lung cancer screening scans to also look for coronary artery calcifications, a marker of atherosclerosis.
Usually, coronary calcifications are considered incidental findings on lung cancer screenings and reported to subjects’ physicians for heart disease risk assessment.
The difference in 4ITLR is that investigators are actively looking for the lesions and quantifying the extent of calcifications.
It’s made possible by the artificial intelligence-based software being used to read the scans. In addition to generating reports on lung nodules, it also automatically calculates an Agatston score, a quantification of the degree of coronary artery calcification for each subject.
At the meeting, which was organized by the European Society for Medical Oncology, 4ITLR investigator Daiwei Han, MD, PhD, a research associate at the Institute for Diagnostic Accuracy in Groningen, the Netherlands, reported outcomes in the first 2487 of the 24,000 planned subjects.
To be eligible for screening, participants had to be 60-79 years old and either current smokers, past smokers who had quit within 10 years, or people with a 35 or more pack-year history. The median age in the study was 68.1 years.
Overall, 53% of subjects had Agatston scores of 100 or more, indicating the need for treatment to prevent active coronary artery disease, Dr. Han said.
Fifteen percent were at high risk for heart disease with scores of 400-999, indicating extensive coronary artery calcification, and 16.2% were at very high risk, with scores of 1000 or higher. The information is being shared with participants’ physicians.
The risk of heart disease was far higher in men, who made up 56% of the study population. While women had a median Agatston score of 61, the median score for men was 211.1.
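To make the reported cut points concrete, here is a minimal Python sketch that maps an Agatston score to the bands described above. The thresholds (100, 400, and 1000) and the example median scores come from the results as reported here; the function name and the wording of the labels are only an illustration.

```python
def classify_agatston(score: float) -> str:
    """Map an Agatston coronary calcium score to the bands reported
    for 4ITLR; thresholds are from the article, labels are paraphrased."""
    if score < 0:
        raise ValueError("Agatston score cannot be negative")
    if score < 100:
        return "below 100: under the reported treatment threshold"
    if score < 400:
        return "100-399: treatment indicated to prevent active coronary artery disease"
    if score < 1000:
        return "400-999: high risk, extensive coronary artery calcification"
    return "1000 or higher: very high risk"


# Reported median scores: 61 for women, 211.1 for men
for median in (61, 211.1):
    print(median, "->", classify_agatston(median))
```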
The findings illustrate the potential of dedicated cardiovascular screening within lung cancer screening programs, Dr. Han said, noting that 4ITLR will also incorporate COPD risk assessment.
The study also shows the increased impact lung cancer screening programs could have if greater use were made of the CT images to look for other diseases, Dr. Spasic said.
4ITLR is funded by the European Union’s Horizon 2020 Program. Dr. Spasic and Dr. Han didn’t have any relevant disclosures.
FROM ELCC 2024
The Simple Change That Can Improve Patient Satisfaction
This transcript has been edited for clarity.
Hello. I’m David Kerr, professor of cancer medicine at the University of Oxford. I’d like to talk today about how we communicate with patients.
This is on my mind because on Friday after clinic, I popped around to see a couple of patients who were in our local hospice. They were there for end-of-life care, being wonderfully well looked after. These were patients I have looked after for 3, 4, or 5 years, patients whom I cared for, and patients of whom I was fond. I think that relationship was reciprocated by them.
We know that any effective communication between patients and doctors is absolutely critical and fundamental to the delivery of patient-centered care. It’s really hard to measure and challenging to attain in the dynamic, often noisy environment of a busy ward or even in the relative peace and quiet of a hospice.
We know that specific behavior by doctors can make a real difference to how they’re perceived by the patient, including their communicative skills and so on. I’ve been a doctor for more than 40 years, but sophisticated communicator though I think I am, there I was, standing by the bedside. It’s really interesting and odd, actually, when you stop and think about it.
There’s an increasing body of evidence that suggests that if the physician sits at the patient’s bedside, establishes better, more direct eye-to-eye contact and so on, then the quality of communication and patient satisfaction is improved.
I picked up on a recent study published just a few days ago in The BMJ; the title of the study is “Effect of Chair Placement on Physicians’ Behavior and Patients’ Satisfaction: Randomized Deception Trial.”
It was done in a single center and there were 125 separate physician interactions. In half of them, the chair in the patient’s room was in its conventional place back against the wall, round a corner, not particularly accessible. The randomization, or the active intervention, if you like, was to have a chair placed less than 3 feet from the patient’s bed and at the patient’s eye level.
What was really interesting was that of these randomized interventions in the setting in which the chair placement was close to the patient’s bed — it was accessible, less than 3 feet — 38 of the 60 physicians sat down in the chair and engaged with the patient from that level.
In the other setting, in which the chair wasn’t immediately adjacent to the bedside (it was back against the wall, out of the way), only in 5 of 60 did the physician retrieve the chair and move it to the right position. Otherwise, they stood and talked to the patient in that way.
The patient satisfaction scores that were measured using a conventional tool were much better for those seated physicians rather than those who stood and towered above.
This is an interesting study with statistically significant findings. It didn’t mean that the physicians who sat spent more time with the patient. It was the same in both settings, at about 10 or 11 minutes. It didn’t alter the physician’s perception of how long they spent with the patient — they guessed it was about 10 minutes, equally on both sides — or indeed the patient’s interpretation of how long the physician stayed.
It wasn’t a temporal thing but just the quality of communication. The patient satisfaction was much better, just simply by sitting at the patient’s bedside and engaging with them. It’s a tiny thing to do that made for a significant qualitative improvement. I’ve learned that lesson. No more towering above. No more standing at the bottom of the patient’s bedside, as I was taught and as I’ve always done.
I’m going to nudge my behavior. I’m going to use the psychology of that small study to nudge myself, the junior doctors that I train, and perhaps even my consultant colleagues, to do the same. It’s a small but effective step forward in improving patient-centered communication.
I’d be delighted to see what you think. How many of you stand? Being old-school, I would have thought that that’s most of us. How many of you make the effort to drag the chair over to sit at the patient’s bedside and to engage more fully? I’d be really interested in any comments that you’ve got.
For the time being, over and out. Ahoy. Thanks for listening.
Dr. Kerr disclosed the following relevant financial relationships: Served as a director, officer, partner, employee, advisor, consultant, or trustee for Celleron Therapeutics and Oxford Cancer Biomarkers (board of directors); Afrox (charity; trustee); and GlaxoSmithKline and Bayer HealthCare Pharmaceuticals (consultant). Serve(d) as a speaker or a member of a speakers bureau for Genomic Health and Merck Serono. Received research grant from Roche. Has a 5% or greater equity interest in Celleron Therapeutics and Oxford Cancer Biomarkers.
A version of this article appeared on Medscape.com.
VA to Expand Cancer Prevention Services
The US Department of Veterans Affairs (VA) announced plans to expand preventive services, health care, and benefits for veterans with cancer.
Urethral cancers are set to be added to the list of > 300 conditions considered presumptive under the Sergeant First Class Heath Robinson Honoring our Promise to Address Comprehensive Toxics (PACT) Act of 2022. Veterans deployed to Iraq, Afghanistan, Somalia, Djibouti, Egypt, Jordan, Lebanon, Syria, Yemen, Uzbekistan, and the entire Southwest Asia theater will not need to prove their service caused their urethral cancer in order to receive treatment for it. Additionally, the VA plans to evaluate whether there is a relationship between urinary bladder and ureteral cancers and toxic exposures for these veterans, and determine whether these conditions are presumptive. The VA has already screened > 5 million veterans for toxic exposures under the PACT Act, as part of an ongoing mission to expand cancer care services.
The VA is also set to expand access to screening programs in 2024 by providing:
- genetic testing to every veteran who may need it;
- lung cancer screening programs to every VA medical center; and
- home tests for colorectal cancer to > 1 million veterans nationwide.
The VA continues to expand the reach of smoking cessation services, with ≥ 6 additional sites added to the Quit VET eReferral program by the end of 2024, and a new pilot program to integrate smoking cessation services into lung cancer screening.
The VA has already taken steps to build on the Biden-Harris Administration Cancer Moonshot program, which has the goals of preventing ≥ 4 million cancer deaths by 2047 and improving the experience of individuals with cancer. For instance, it has prioritized claims processing for veterans with cancer and expanded cancer risk assessments and mammograms to veterans aged < 40 years, regardless of symptoms, family history, or whether they are enrolled in VA health care. In September, the VA and the National Cancer Institute announced a data-sharing collaboration to better understand and treat cancer among veterans.
“VA is planting the seeds for the future of cancer care,” said VHA Under Secretary for Health Shereef Elnahal, MD. “By investing in screenings, expanding access, and embracing cutting-edge technologies, VA is revolutionizing cancer care delivery, providing the best care possible to our nation’s heroes.”
Disadvantaged Neighborhoods Tied to Higher Dementia Risk, Brain Aging
Living in a disadvantaged neighborhood is associated with accelerated brain aging and a higher risk for early dementia, regardless of income level or education, new research suggested.
“If you want to prevent dementia and you’re not asking someone about their neighborhood, you’re missing information that’s important to know,” lead author Aaron Reuben, PhD, postdoctoral scholar in neuropsychology and environmental health at Duke University, Durham, North Carolina, said in a news release.
The study was published online in Alzheimer’s & Dementia.
Higher Risk in Men
Few interventions exist to halt or delay the progression of Alzheimer’s disease and related dementias (ADRD), which has increasingly led to a focus on primary prevention.
Although previous research pointed to a link between socioeconomically disadvantaged neighborhoods and a greater risk for cognitive deficits, mild cognitive impairment, dementia, and poor brain health, the timeline for the emergence of that risk is unknown.
To fill in the gaps, investigators studied data on 1.4 million New Zealand residents, dividing neighborhoods into quintiles based on level of disadvantage (assessed by the New Zealand Index of Deprivation) to see whether dementia diagnoses followed neighborhood socioeconomic gradients.
After adjusting for covariates, they found that overall, those living in disadvantaged areas were slightly more likely to develop dementia across the 20-year study period (adjusted hazard ratio [HR], 1.09; 95% CI, 1.08-1.10).
The more disadvantaged the neighborhood, the higher the dementia risk, with a 43% higher risk for ADRD among those in the highest quintile than among those in the lowest quintile (HR, 1.43; 95% CI, 1.36-1.49).
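As a quick arithmetic aside, the percentage figures quoted here follow directly from the hazard ratios; the sketch below shows that standard conversion, which is a general property of hazard ratios rather than anything specific to this study.

```python
def excess_risk_pct(hazard_ratio: float) -> float:
    """Express a hazard ratio as the 'X% higher risk' phrasing used in the text."""
    return (hazard_ratio - 1) * 100


# Reported estimates: overall adjusted HR 1.09; highest vs lowest quintile HR 1.43
print(round(excess_risk_pct(1.09)))  # 9  -> roughly 9% higher risk overall
print(round(excess_risk_pct(1.43)))  # 43 -> 43% higher risk in the most disadvantaged quintile
```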
The effect was larger in men than in women and in younger vs older individuals, with the youngest age group showing 21% greater risk in women and 26% greater risk in men vs the oldest age group.
Dementia Prevention Starts Early
Researchers then turned to the Dunedin Study, a cohort of 938 New Zealanders (50% female) followed from birth to age 45 to track their psychological, social, and physiological health with brain scans, memory tests, and cognitive self-assessments.
The analysis suggested that by age 45, those living in more disadvantaged neighborhoods across adulthood had accumulated a significantly greater number of midlife risk factors for later ADRD.
They also had worse structural brain integrity, with each standard deviation increase in neighborhood disadvantage resulting in a thinner cortex, greater white matter hyperintensities volume, and older brain age.
Those living in poorer areas had lower cognitive test scores, reported more issues with everyday cognitive function, and showed a greater reduction in IQ from childhood to midlife. Analysis of brain scans also revealed that their mean brain age was 2.98 years older than that of people living in the least disadvantaged areas (P = .001).
Limitations included the study’s observational design, which could not establish causation, and the fact that the researchers did not have access to individual-level socioeconomic information for the entire population. Additionally, brain-integrity measures in the Dunedin Study were largely cross-sectional.
“If you want to truly prevent dementia, you’ve got to start early because 20 years before anyone will get a diagnosis, we’re seeing dementia’s emergence,” Dr. Reuben said. “And it could be even earlier.”
Funding for the study was provided by the National Institutes of Health; UK Medical Research Council; Health Research Council of New Zealand; Brain Research New Zealand; New Zealand Ministry of Business, Innovation, & Employment; and the Duke University and University of North Carolina Alzheimer’s Disease Research Center. The authors declared no relevant financial relationships.
A version of this article appeared on Medscape.com.
FROM ALZHEIMER’S AND DEMENTIA
Upfront Low-Dose Radiation Improves Advanced SCLC Outcomes
The analysis, presented at the 2024 European Lung Cancer Congress, showed that adding low-dose radiation improved patients’ median progression-free and overall survival compared with the standard first-line treatment results reported in a 2019 trial, said lead author Yan Zhang, MD.
The standard first-line treatment results came from the 2019 CASPIAN trial, which found that patients receiving the first-line regimen had a median progression-free survival of 5 months and a median overall survival of 13 months, with 54% of patients alive at 1 year.
The latest data, which included a small cohort of 30 patients, revealed that adding low-dose radiation to the standard first-line therapy led to a higher median progression-free survival of 8.3 months and extended median overall survival beyond the study follow-up period of 17.3 months. Overall, 66% of patients were alive at 1 year.
These are “promising” improvements over CASPIAN, Dr. Zhang, a lung cancer medical oncologist at Sichuan University, Chengdu, China, said at the Congress, which was organized by the European Society for Medical Oncology.
Study discussant Gerry Hanna, PhD, MBBS, a radiation oncologist at Belfast City Hospital, Belfast, Northern Ireland, agreed. Although there were just 30 patients, “you cannot deny these are [strong] results in terms of extensive-stage small cell cancer,” Dr. Hanna said.
Although standard first-line treatment of extensive-stage SCLC is durvalumab plus etoposide-platinum chemotherapy, the benefits aren’t durable for many patients.
This problem led Dr. Zhang and his colleagues to look for ways to improve outcomes. Because the CASPIAN trial did not include radiation to the primary tumor, it seemed a logical strategy to explore.
In the current single-arm study, Dr. Zhang and his team added 15 Gy radiation in five fractions to the primary lung tumors of 30 patients during the first cycle of durvalumab plus etoposide-platinum.
Subjects received 1500 mg of durvalumab plus etoposide-platinum every 3 weeks for four cycles. Low-dose radiation to the primary tumor was delivered over 5 days at the start of treatment. Patients then continued with durvalumab maintenance every 4 weeks until progression or intolerable toxicity.
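For readers checking the fractionation arithmetic, the sketch below simply assumes the 15 Gy total was split evenly across the five daily fractions, as the reported schedule implies; it is an illustration of the arithmetic, not part of the study protocol.

```python
def dose_per_fraction(total_dose_gy: float, n_fractions: int) -> float:
    """Evenly divide a total radiation dose (in Gy) across fractions."""
    if n_fractions <= 0:
        raise ValueError("number of fractions must be positive")
    return total_dose_gy / n_fractions


# 15 Gy delivered to the primary tumor in five fractions over 5 days
print(dose_per_fraction(15, 5))  # 3.0 Gy per fraction
```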
Six patients (20%) had liver metastases at the baseline, and three (10%) had brain metastases. Over half had prophylactic cranial radiation. Performance scores were 0-1, and all but one of the participants were men.
Six- and 12-month progression-free survival rates were 57% and 40%, respectively. Overall survival was 90% at 6 months and 66% at 12 months. Median overall survival was 13 months in the CASPIAN trial but not reached in Dr. Zhang’s trial after a median follow-up of 17.3 months, with the earliest deaths occurring at 10.8 months.
Grade 3 treatment-related adverse events occurred in 80% of patients, most frequently hematologic toxicities. Five patients (16.7%) had severe adverse reactions to radiation. Although the overall dose of radiation was low, the individual fractions, at 3 Gy each, were on the large side.
Dr. Hanna wanted more information on the radiotoxicity issue, but even so, he said that adding low-dose radiation to the durvalumab-chemotherapy regimen warrants further investigation.
Both Dr. Hanna and Dr. Zhang thought that instead of killing cancer cells directly, the greatest benefit of upfront radiation, and the peritumoral inflammation it causes, is to augment durvalumab’s effect.
Overall, Dr. Hanna stressed that results like these have not been seen before in an SCLC study, particularly for novel agents, let alone radiation.
The study was funded by AstraZeneca, maker of durvalumab. Dr. Zhang and Dr. Hanna didn’t have any relevant disclosures.
A version of this article appeared on Medscape.com.
FROM ELCC 2024
Vitamin D Deficiency May Be Linked to Peripheral Neuropathy
TOPLINE:
Vitamin D deficiency is independently linked to the risk for diabetic peripheral neuropathy (DPN) by potentially affecting large nerve fibers in older patients with type 2 diabetes (T2D).
METHODOLOGY:
- Although previous research has shown that vitamin D deficiency is common in patients with diabetes and may increase the risk for peripheral neuropathy, its effects on large and small nerve fiber lesions have not been well explored yet.
- Researchers conducted a cross-sectional study to understand the association between vitamin D deficiency and DPN development in 230 older patients (mean age, 67 years) with T2D for about 15 years who were recruited from Beijing Hospital between 2020 and 2023.
- All patients were evaluated for DPN on the basis of poor blood sugar control or symptoms such as pain and sensory abnormalities; the 175 patients diagnosed with DPN were propensity-matched with 55 patients without DPN.
- Vitamin D deficiency, defined as serum 25-hydroxyvitamin D circulating levels below 20 ng/mL, was reported in 169 patients.
- Large nerve fiber lesions were evaluated using electromyography, and small nerve fiber lesions were assessed by measuring skin conductance.
TAKEAWAY:
- Vitamin D deficiency was more likely to affect large nerve fibers, as suggested by longer median sensory nerve latency, minimum F-wave latency, and median nerve motor evoked potential latency in the deficient group than in the vitamin D–sufficient group.
- Furthermore, vitamin D deficiency was linked to large fiber neuropathy with increased odds of prolongation of motor nerve latency (odds ratio, 1.362; P = .038).
- The electrochemical skin conductance, which indicates damage to small nerve fibers, was comparable between patients with and without vitamin D deficiency.
IN PRACTICE:
This study is too preliminary to have practice application.
SOURCE:
This study was led by Sijia Fei, Department of Endocrinology, Beijing Hospital, Beijing, People’s Republic of China, and was published online in Diabetes Research and Clinical Practice.
LIMITATIONS:
Skin biopsy, the “gold-standard” for quantifying intraepidermal nerve fiber density, was not used to assess small nerve fiber lesions. Additionally, a causal link between vitamin D deficiency and diabetic nerve damage was not established owing to the cross-sectional nature of the study. Some patients with T2D may have been receiving insulin therapy, which may have affected vitamin D levels.
DISCLOSURES:
The study was supported by grants from the National Natural Science Foundation of China and China National Key R&D Program. The authors declared no conflicts of interest.
A version of this article appeared on Medscape.com.
New Guidelines: Start PSA Screening Earlier in Black Men
Lowering the recommended age for baseline prostate-specific antigen (PSA) testing would reduce prostate cancer deaths by about 30% in Black men without significantly increasing the rate of overdiagnosis, according to new screening guidelines from the Prostate Cancer Foundation.
Specifically, Black men should have a baseline PSA test between the ages of 40 and 45, a multidisciplinary panel of experts and patient advocates determined on the basis of a comprehensive literature review.
The panel’s findings were presented in a poster at the ASCO Genitourinary Cancers Symposium.
“Black men in the United States are considered a high-risk population for being diagnosed with and dying from prostate cancer,” wrote lead author Isla Garraway, MD, PhD, of the University of California, Los Angeles, and colleagues. Specifically, Black men are about two times more likely to be diagnosed with and die from prostate cancer than White men. But, the authors continued, “few guidelines have outlined specific recommendations for PSA-based prostate cancer screening among Black men.”
The US Preventive Services Task Force recommendations, which are currently being updated, set the PSA screening start age at 55. The task force recommendations, which dictate insurance coverage in the United States, acknowledged “a potential mortality benefit for African American men when beginning screening before age 55 years” but did not explicitly recommend screening earlier.
Current guidelines from the American Cancer Society call for discussions about screening in average-risk men to begin at age 50-55. The recommendations do specify lowering the age to 45 for those at a high risk for prostate cancer, which includes Black men as well as those with a first-degree relative diagnosed with prostate cancer before age 65. In some cases, screening can begin at age 40 in the highest risk men — those with more than one first-degree relative who had prostate cancer at a young age.
The Prostate Cancer Foundation “wanted to address the confusion around different guideline statements and the lack of clarity around screening recommendations for Black men,” said William K. Oh, MD, of The Tisch Cancer Institute, Icahn School of Medicine at Mount Sinai, New York City, who chaired the panel for the new guidelines. “We thus convened a distinguished panel of experts from diverse backgrounds and expertise to create six guidelines statements to help Black men, their families, and their healthcare providers to consider options for prostate cancer screening based on the best available evidence.”
After reviewing 287 studies, the expert panel developed six new guideline statements addressing screening for Black men, each reaching at least 80% consensus among panel members:
- Because Black men are at a high risk for prostate cancer, the benefits of screening generally outweigh the risks.
- PSA testing should be considered first line for prostate cancer screening, although some providers may recommend an optional digital rectal exam in addition to the PSA test.
- Black men should engage in shared decision-making with their healthcare providers and other trusted sources of information to learn about the pros and cons of screening.
- For Black men who elect screening, a baseline PSA test should be done between ages 40 and 45, and annual PSA screening should be strongly considered based on the PSA value and the individual’s health status.
- Black men over age 70 who have been undergoing prostate cancer screening should talk with their healthcare provider about whether to continue PSA testing and make an informed decision based on their age, life expectancy, health status, family history, and prior PSA levels.
- Black men who are at even higher risk due to a strong family history and/or known carriers of high-risk genetic variants should consider initiating annual PSA screening as early as age 40.
These statements are based on “the best available evidence, which overwhelmingly supports the conclusion that Black men in the US could benefit from a risk-adapted PSA screening,” the investigators concluded, noting that the latest evidence “warrants revisiting current recommendations for early [prostate cancer] detection in Black men from other national guideline groups.”
“We believe that the outcome of these more directed guidelines will be to give clarity to these men,” added Dr. Oh, who is also chief medical officer for the Prostate Cancer Foundation.
This research was funded by the Prostate Cancer Foundation, National Cancer Institute, Veterans Affairs, Jean Perkins Foundation, and Department of Defense. Dr. Garraway reported having no disclosures.
A version of this article appeared on Medscape.com.
FROM ASCO GU 2024
New Transparent AI Predicts Breast Cancer 5 Years Out
A new way of using artificial intelligence (AI) can predict breast cancer 5 years in advance with impressive accuracy — and unlike previous AI models, we know how this one works.
The new AI system, called AsymMirai, simplifies previous models by solely comparing differences between right and left breasts to predict risk. It could potentially save lives, prevent unnecessary testing, and save the healthcare system money, its creators say.
“With traditional AI, you ask it a question and it spits out an answer, but no one really knows how it makes its decisions. It’s a black box,” said Jon Donnelly, a PhD student in the department of computer science at Duke University, Durham, North Carolina, and first author on a new paper in Radiology describing the model.
“With our approach, people know how the algorithm comes up with its output so they can fact-check it and trust it,” he said.
One in eight women will develop invasive breast cancer, and 1 in 39 will die from it. Mammograms miss about 20% of breast cancers. (The shortcomings of genetic screening and mammograms received extra attention recently when actress Olivia Munn disclosed that she’d been treated for an aggressive form of breast cancer despite a normal mammogram and a negative genetic test.)
The model could help doctors bring the often-abstract idea of AI to the bedside in a meaningful way, said radiologist Vivianne Freitas, MD, assistant professor of medical imaging at the University of Toronto.
“This marks a new chapter in the field of AI,” said Dr. Freitas, who authored an editorial lauding the new paper. “It makes AI more tangible and understandable, thereby improving its potential for acceptance.”
AI as a Second Set of Eyes
Mr. Donnelly described AsymMirai as a simpler, more transparent, and easier-to-use version of Mirai, a breakthrough AI model which made headlines in 2021 with its promise to determine with unprecedented accuracy whether a patient is likely to get breast cancer within the next 5 years.
Mirai identified up to twice as many future cancer diagnoses as the conventional risk calculator Tyrer-Cuzick. It also maintained accuracy across a diverse set of patients — a notable plus for two fields (AI and healthcare) notorious for delivering poorer results for minorities.
Tyrer-Cuzick and other lower-tech risk calculators use personal and family history to statistically calculate risk. Mirai, on the other hand, analyzes countless bits of raw data embedded in a mammogram to decipher patterns a radiologist’s eyes may not catch. Four images, including two angles from each breast, are fed into the model, which produces a score between 0 and 1 to indicate the person’s risk of getting breast cancer in 1, 3, or 5 years.
But even Mirai’s creators have conceded they didn’t know exactly how it arrives at that score — a fact that has fueled hesitancy among clinicians.
Study coauthor Fides Schwartz, MD, a radiologist at Brigham and Women’s Hospital, Boston, said researchers were able to crack the code on Mirai’s “black box,” finding that its scores were largely determined by assessing subtle differences between right breast tissue and left breast tissue.
Knowing this, the research team simplified the model to predict risk based solely on “local bilateral dissimilarity.” AsymMirai was born.
The team then used AsymMirai to look back at more than 200,000 mammograms from nearly 82,000 patients. They found it worked nearly as well as its predecessor, assigning a higher risk to those who would go on to develop cancer 66% of the time (vs Mirai’s 71%). In patients in whom it flagged the same asymmetry multiple years in a row, it worked even better, with an 88% chance of assigning a higher score to people who would later develop cancer than to those who would not.
“We found that we can, with surprisingly high accuracy, predict whether a woman will develop cancer in the next 1-5 years based solely on localized differences between her left and right breast tissue,” said Mr. Donnelly.
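The published model is considerably more involved, but the core idea of scoring local left-right dissimilarity can be roughed out as follows. The sketch below is illustrative only: the function names are hypothetical, it uses simple patchwise intensity statistics rather than learned features, and it is not the authors’ implementation.

```python
# Rough sketch of the "local bilateral dissimilarity" idea only; function and
# variable names are hypothetical and this is not the authors' implementation.
import numpy as np

def local_features(image: np.ndarray, grid: int = 8) -> np.ndarray:
    """Split a 2D mammogram view into a grid x grid set of patches and
    summarize each patch with a simple intensity statistic (patch mean)."""
    h, w = image.shape
    patches = image[: h - h % grid, : w - w % grid].reshape(grid, h // grid, grid, w // grid)
    return patches.mean(axis=(1, 3))           # grid x grid feature map

def asymmetry_score(left_view: np.ndarray, right_view: np.ndarray) -> float:
    """Higher values mean the left and right breast differ more locally.
    The right view is mirrored so corresponding regions line up."""
    left = local_features(left_view)
    right = local_features(np.fliplr(right_view))
    return float(np.abs(left - right).max())   # largest local mismatch

# Toy example with random "images"; a real pipeline would use registered
# mammogram views and learned features rather than raw pixel means.
rng = np.random.default_rng(0)
left, right = rng.random((512, 512)), rng.random((512, 512))
print(f"asymmetry score: {asymmetry_score(left, right):.3f}")
```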
Dr. Schwartz imagines a day when radiologists could use the model to help develop personalized screening strategies for patients. Doctors might advise those with higher scores to get screened more often than guidelines suggest, supplement mammograms with an MRI, and keep a close watch on trouble spots identified by AI.
“For people with really low risk, on the other hand, maybe we can save them an annual exam that’s not super pleasant and might not be necessary,” said Dr. Schwartz.
Cautious Optimism
Robert Smith, PhD, senior vice president of early cancer detection science at the American Cancer Society, noted that AI has been used for decades to try to reduce radiologists’ workload and improve diagnoses.
“But AI just never really lived up to its fullest potential,” Dr. Smith said, “quite often because it was being used as a crutch by inexperienced radiologists who, instead of interpreting the mammogram and then seeing what AI had to say ended up letting AI do most of the work which, frankly, just wasn’t that accurate.”
He’s hopeful that newer, more sophisticated iterations of AI medical imaging platforms (roughly 18-20 models are in development) can ultimately save women’s lives, particularly in areas where radiologists are in short supply.
But he believes it will be a long time before doctors, or their patients, are willing to risk postponing a mammogram based on an algorithm.
A version of this article appeared on Medscape.com.
New CRC Risk Prediction Model Outperforms Polyp-Based Model
TOPLINE:
A comprehensive model considering patient age, diabetes, colonoscopy indications, and polyp findings can predict colorectal cancer (CRC) risk more accurately than the solely polyp-based model in patients with a first diagnosis of adenoma on colonoscopy.
METHODOLOGY:
- Because colonoscopy surveillance guidelines relying solely on previous polyp findings to assess CRC risk are imprecise, researchers developed and tested a comprehensive risk prediction model from a list of CRC-related predictors that included patient characteristics and clinical factors in addition to polyp findings.
- The comprehensive model included baseline colonoscopy indication, age group, diabetes diagnosis, and polyp findings (adenoma with advanced histology, polyp size ≥ 10 mm, and sessile serrated or traditional serrated adenoma).
- Researchers randomly assigned 95,001 patients (mean age, 61.9 years; 45.5% women) who underwent colonoscopy with polypectomy to remove a conventional adenoma into two cohorts: model development (n = 66,500) and internal validation (n = 28,501).
- In both cohorts, researchers compared the performance of the polyp findings–only model against the comprehensive model in predicting CRC, defined as an adenocarcinoma of the colon or rectum diagnosed at least 1 year after the baseline colonoscopy.
TAKEAWAY:
- During the follow-up period starting 1 year after colonoscopy, 495 patients were diagnosed with CRC; 354 were in the development cohort and 141 were in the validation cohort.
- The comprehensive model demonstrated better predictive performance than the traditional polyp-based model in both the development cohort (area under the curve [AUC], 0.71 vs 0.61) and the validation cohort (AUC, 0.70 vs 0.62).
- The difference in the Akaike information criterion (AIC) values between the comprehensive and polyp-only models was 45.7, well above the conventional threshold of 10, strongly indicating the superior fit of the comprehensive model (a generic illustration of this kind of AUC/AIC comparison follows this list).
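For readers who want to see what such a head-to-head comparison looks like in practice, the sketch below fits a polyp-only and a comprehensive logistic regression model and reports their AUC and AIC values. It is a generic illustration only, using synthetic data and hypothetical predictor names, not the Kaiser Permanente dataset or the authors’ model specification.

```python
# Illustrative sketch only: synthetic data and hypothetical predictors, not the
# study's dataset or model specification.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "advanced_histology":   rng.integers(0, 2, n),   # polyp findings
    "large_polyp":          rng.integers(0, 2, n),
    "serrated_adenoma":     rng.integers(0, 2, n),
    "age_65_plus":          rng.integers(0, 2, n),   # added clinical factors
    "diabetes":             rng.integers(0, 2, n),
    "screening_indication": rng.integers(0, 2, n),
})
# Synthetic outcome loosely tied to both polyp and clinical factors
logit = (-5 + 1.2 * df["advanced_histology"] + 0.8 * df["large_polyp"]
         + 0.9 * df["age_65_plus"] + 0.6 * df["diabetes"])
df["crc"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def fit_and_score(predictors):
    """Fit a logistic model and return its AIC and in-sample AUC."""
    X = sm.add_constant(df[predictors])
    model = sm.Logit(df["crc"], X).fit(disp=0)
    return model.aic, roc_auc_score(df["crc"], model.predict(X))

polyp_only    = ["advanced_histology", "large_polyp", "serrated_adenoma"]
comprehensive = polyp_only + ["age_65_plus", "diabetes", "screening_indication"]

aic_p, auc_p = fit_and_score(polyp_only)
aic_c, auc_c = fit_and_score(comprehensive)

# Lower AIC means better fit after penalizing extra parameters; a gap larger
# than about 10 is conventionally read as strong support for the lower-AIC model.
print(f"polyp-only:    AIC={aic_p:.1f}  AUC={auc_p:.2f}")
print(f"comprehensive: AIC={aic_c:.1f}  AUC={auc_c:.2f}")
```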
IN PRACTICE:
“Improving the ability to accurately predict the patients at highest risk for CRC after polypectomy is critically important, given the considerable costs and resources associated with treating CRC and the better prognosis associated with early cancer detection. The current findings provide proof of concept that inclusion of CRC risk factors beyond prior polyp findings has the potential to improve post-colonoscopy risk stratification,” the authors wrote.
SOURCE:
The study, led by Jeffrey K. Lee, MD, MPH, Division of Research, Kaiser Permanente Northern California, Oakland, California, was published online in The American Journal of Gastroenterology.
LIMITATIONS:
External validation of the model’s performance is needed in different practice settings. The generalizability of the findings is limited because the study population did not include individuals without a prior adenoma or those with an isolated serrated polyp. Moreover, the examination of polyp size > 20 mm as a potential predictor of CRC was precluded due to incomplete data.
DISCLOSURES:
The study was conducted within the National Cancer Institute–funded Population-Based Research to Optimize the Screening Process II consortium and funded by a career development grant from the National Cancer Institute to Lee. The authors declared no conflicts of interest.
A version of this article appeared on Medscape.com.
New CRC Stool Test Beats FIT for Sensitivity but Not Specificity
A next-generation multi-target stool DNA test detected colorectal cancer (CRC) with greater sensitivity, but lower specificity, than the fecal immunochemical test (FIT), according to the large prospective BLUE-C study.
The multi-target assay by Exact Sciences Corporation, the makers of Cologuard, includes new biomarkers designed to increase specificity without decreasing sensitivity. It showed a sensitivity for CRC of almost 94%, with more than 43% sensitivity for advanced precancerous lesions and nearly 91% specificity for advanced neoplasia, according to the study results, which were published in The New England Journal of Medicine.
Adherence to CRC screening in the United States is well below the 80% national target, and the quest continues for noninvasive screening assays that might improve screening adherence, noted lead author Thomas F. Imperiale, MD, AGAF, a professor of medicine at Indiana University School of Medicine in Indianapolis, and colleagues.
“The test’s manufacturer developed a new version of its existing Cologuard FIT/DNA test because it took to heart the feedback from primary care providers and gastroenterologists about the test’s low specificity,” Dr. Imperiale said in an interview. “The goal of the new test was to improve specificity without losing, and perhaps even gaining, some sensitivity — a goal that is not easily accomplished when you’re trying to improve on a sensitivity for colorectal cancer that was already 92.3% in the current version of Cologuard.”
Compared with the earlier version of Cologuard, he added, the new generation retained sensitivity for CRC and advanced precancerous lesions or polyps while improving specificity for advanced neoplasia (a combination of CRC and advanced precancerous lesions) from 86.6% to 90.6%, corresponding to roughly a 30% reduction in the false-positive rate. “This with the caveat, however, that the two versions were not compared head-to-head in this new study,” Dr. Imperiale said.
The higher specificity for advanced lesions is expected to translate to a lower false positive rate. Lowering false positive rates is crucial because that reduces the need for costly, invasive, and unnecessary colonoscopies, said Aasma Shaukat, MD, MPH, AGAF, director of outcomes research in NYU Langone Health’s division of gastroenterology and hepatology in New York City.
“Many physicians felt there were too many false positives with the existing version, and that is anxiety-provoking in patients and providers,” said Dr. Shaukat, who was not involved in the study.
In her view, however, the test’s moderate improvements in detecting certain lesions do not make it demonstrably superior to its predecessor, and there is always the possibility of higher cost to consider.
While acknowledging that a higher sensitivity for all advanced precancerous lesions would have been welcome, Dr. Imperiale said the test detected 75% of the most worrisome of such lesions — “the ones containing high-grade dysplastic cells and suggesting near-term conversion to cancer. And its ability to detect other advanced lesions improved as the size of the lesions increased.”
Testing details
Almost 21,000 asymptomatic participants age 40 years and older undergoing screening colonoscopy were evaluated at 186 US sites during the period 2019 to 2023. Of the cohort, 98 had CRC, 2144 had advanced precancerous lesions, 6973 had nonadvanced adenomas, and 10,961 had nonneoplastic findings or negative colonoscopy.
Advanced precancerous lesions included one or more adenomas or sessile serrated lesions measuring at least 1 cm in the longest dimension, lesions with villous histologic features, and high-grade dysplasia. The new DNA test identified 92 of 98 participants with CRC and 76 of 82 participants with screening-relevant cancers. Among the findings for the new assay:
- Sensitivity for any-stage CRC was 93.9% (95% confidence interval [CI], 87.1-97.7)
- Sensitivity for advanced precancerous lesions was 43.4% (95% CI, 41.3-45.6)
- Sensitivity for high-grade dysplasia was 74.6% (95% CI, 65.6-82.3)
- Specificity for advanced neoplasia was 90.6% (95% CI, 90.1-91.0)
- Specificity for nonneoplastic findings or negative colonoscopy was 92.7% (95% CI, 92.2-93.1)
- Specificity for negative colonoscopy was 93.3% (95% CI, 92.8-93.9)
- No adverse events occurred.
In the comparator assay, OC-AUTO FIT by Polymedco, sensitivity was 67.3% (95% CI, 57.1-76.5) for CRC, 23.3% (95% CI, 21.5-25.2) for advanced precancerous lesions, and 47.4% (95% CI, 37.9-56.9) for high-grade dysplasia. In the comparator FIT, however, specificity was better across all age groups — at 94.8% (95% CI, 94.4-95.1) for advanced neoplasia, 95.7% (95% CI, 95.3- 96.1) for nonneoplastic findings, and 96.0% (95% CI, 95.5-96.4) for negative colonoscopy.
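As a quick check on how the headline detection figures above are derived, the short sketch below recomputes the new assay’s cancer sensitivities from the reported counts, assuming the standard definition (true positives divided by all participants with the condition); the study’s confidence intervals, of course, come from the full dataset.

```python
# Arithmetic check using counts reported in the article; the standard definition
# of sensitivity (true positives / all participants with the condition) is assumed.
def sensitivity(true_positives: int, total_with_condition: int) -> float:
    return true_positives / total_with_condition

# 92 of 98 participants with CRC, and 76 of 82 with screening-relevant cancers,
# tested positive on the next-generation assay.
print(f"CRC sensitivity: {sensitivity(92, 98):.1%}")                        # 93.9%
print(f"Screening-relevant cancer sensitivity: {sensitivity(76, 82):.1%}")  # 92.7%
```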
In another article in the same issue of NEJM, Guardant Health’s cell-free DNA blood-based test had 83% sensitivity for CRC, 90% specificity for advanced neoplasia, and 13% sensitivity for advanced precancerous lesions in an average-risk population.
An age-related decrease in specificity was observed with the new Cologuard test, but that did not concern Dr. Imperiale because the same observation was made with the current version. “In fact, the next-gen version appears to have less of an age-related decrease in specificity than the current version, although, again, the two versions were not tested head-to-head,” he noted.
The effect of age-related background methylation of DNA is well known, he explained. “Clinicians and older patients in the screening age range do need to be aware of this effect on specificity before ordering or agreeing to do the test. I do not see this as a stumbling block to implementation, but it does require discussion between patient and ordering provider.”
The new version of the DNA test is expected to be available in about a year.
According to Dr. Imperiale, further research is needed to ascertain the test’s acceptability and adherence rates and to quantify its yield in population-based screening. Determining its cost-effectiveness and making it easier to use are other goals. “And most importantly, the degree of reduction in the incidence and mortality from colorectal cancer,” he said.
Cost-effectiveness and the selection of the testing interval may play roles in adherence, particularly in populations with lower rates of screening adherence than the general population, John M. Carethers, MD, AGAF, of the University of California, San Diego, noted in a related editorial.
“Adherence to screening varies according to age group, including persons in the 45- to 49-year age group who are now eligible for average-risk screening,” he wrote. “It is hoped that these newer tests will increase use and adherence and elevate the percentage of the population undergoing screening in order to reduce deaths from colorectal cancer.”
This study was sponsored by Exact Sciences Corporation, which conducted the stool testing at its laboratories.
Dr. Imperiale had no competing interests to disclose. Several study co-authors reported employment with Exact Sciences, or stock and intellectual property ownership. Dr. Shaukat disclosed consulting for Freenome. Dr. Carethers reported ties to Avantor Inc. and Geneoscopy.
, according to the large prospective BLUE-C study.
The multi-target assay by Exact Sciences Corporation, the makers of Cologuard, includes new biomarkers designed to increase specificity without decreasing sensitivity. It showed a sensitivity for CRC of almost 94%, with more than 43% sensitivity for advanced precancerous lesions and nearly 91% specificity for advanced neoplasia, according to the study results, which were published in The New England Journal of Medicine.
Adherence to CRC screening in the United States is well below the 80% national target, and the quest continues for noninvasive screening assays that might improve screening adherence, noted lead author Thomas F. Imperiale, MD, AGAF, a professor of medicine at Indiana University School of medicine in Indianapolis, and colleagues.
“The test’s manufacturer developed a new version of its existing Cologuard FIT/DNA test because it took to heart the feedback from primary care providers and gastroenterologists about the test’s low specificity,” Dr. Imperiale said in an interview. “The goal of the new test was to improve specificity without losing, and perhaps even gaining, some sensitivity — a goal that is not easily accomplished when you’re trying to improve on a sensitivity for colorectal cancer that was already 92.3% in the current version of Cologuard.”
Compared with the earlier version of Cologuard, he added, the new generation retained sensitivity for CRC and advanced precancerous lesions or polyps while improving specificity by 30% (90.6% vs 86.6%) for advanced neoplasia — a combination of CRC and advanced precancerous lesions, he said. “This with the caveat, however, that the two versions were not compared head-to-head in this new study,” Dr. Imperiale said.
The higher specificity for advanced lesions is expected to translate to a lower false positive rate. Lowering false positive rates is crucial because that reduces the need for costly, invasive, and unnecessary colonoscopies, said Aasma Shaukat, MD, MPH, AGAF, director of outcomes research in NYU Langone Health’s division of gastroenterology and hepatology in New York City.
“Many physicians felt there were too many false positives with the existing version, and that is anxiety-provoking in patients and providers,” said Dr. Shaukat, who was not involved in the study.
In her view, however, the test’s moderate improvements in detecting certain lesions does not make it demonstrably superior to its predecessor, and there is always the possibility of higher cost to consider.
While acknowledging that a higher sensitivity for all advanced precancerous lesions would have been welcome, Dr. Imperiale said the test detected 75% of the most worrisome of such lesions — “the ones containing high-grade dysplastic cells and suggesting near-term conversion to cancer. And its ability to detect other advanced lesions improved as the size of the lesions increased.”
Testing details
Almost 21,000 asymptomatic participants age 40 years and older undergoing screening colonoscopy were evaluated at 186 US sites during the period 2019 to 2023. Of the cohort, 98 had CRC, 2144 had advanced precancerous lesions, 6973 had nonadvanced adenomas, and 10,961 had nonneoplastic findings or negative colonoscopy.
Advanced precancerous lesions included one or more adenomas or sessile serrated lesions measuring at least 1 cm in the longest dimension, lesions with villous histologic features, and high-grade dysplasia. The new DNA test identified 92 of 98 participants with CRC and 76 of 82 participants with screening-relevant cancers. Among the findings for the new assay:
- Sensitivity for any-stage CRC was 93.9% (95% confidence interval [CI], 87.1-97.7)
- Sensitivity for advanced precancerous lesions was 43.4% (95% CI, 41.3-45.6)
- Sensitivity for high-grade dysplasia was 74.6% (95% CI, 65.6-82.3)
- Specificity for advanced neoplasia was 90.6% (95% CI, 90.1-91.0)
- Specificity for nonneoplastic findings or negative colonoscopy was 92.7% (95% CI, 92.2-93.1)
- Specificity for negative colonoscopy was 93.3% (95% CI, 92.8-93.9)
- No adverse events occurred.
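For readers who want to see how point estimates like these follow from the underlying counts, the sketch below recomputes the CRC sensitivity from the 92-of-98 detection figure reported above. It assumes an exact (Clopper-Pearson) binomial interval, since the interval method is not described here, so the reproduced bounds are only an approximation.

```python
# Recompute a sensitivity estimate and 95% CI from raw counts.
# Assumes an exact (Clopper-Pearson) binomial interval, which closely
# reproduces the reported 93.9% (87.1-97.7) for 92 of 98 cancers detected.
from scipy.stats import beta

def exact_ci(successes: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
    """Clopper-Pearson (exact) confidence interval for a binomial proportion."""
    lower = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lower, upper

detected, cancers = 92, 98  # CRC cases detected by the new assay / total CRC cases
sensitivity = detected / cancers
lower, upper = exact_ci(detected, cancers)
print(f"Sensitivity {sensitivity:.1%} (95% CI, {lower:.1%}-{upper:.1%})")
```

The same helper can be applied to the other numerators and denominators behind the list above, although the published bounds may differ slightly depending on the exact interval method the investigators used.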
In the comparator assay, OC-AUTO FIT by Polymedco, sensitivity was 67.3% (95% CI, 57.1-76.5) for CRC, 23.3% (95% CI, 21.5-25.2) for advanced precancerous lesions, and 47.4% (95% CI, 37.9-56.9) for high-grade dysplasia. In the comparator FIT, however, specificity was better across all age groups: 94.8% (95% CI, 94.4-95.1) for advanced neoplasia, 95.7% (95% CI, 95.3-96.1) for nonneoplastic findings, and 96.0% (95% CI, 95.5-96.4) for negative colonoscopy.
In another article in the same issue of NEJM, Guardant Health’s cell-free DNA blood-based test had 83% sensitivity for CRC, 90% specificity for advanced neoplasia, and 13% sensitivity for advanced precancerous lesions in an average-risk population.
An age-related decrease in specificity was observed with the new Cologuard test, but that did not concern Dr. Imperiale because the same observation was made with the current version. “In fact, the next-gen version appears to have less of an age-related decrease in specificity than the current version, although, again, the two versions were not tested head-to-head,” he noted.
The effect of age-related background methylation of DNA is well known, he explained. “Clinicians and older patients in the screening age range do need to be aware of this effect on specificity before ordering or agreeing to do the test. I do not see this as a stumbling block to implementation, but it does require discussion between patient and ordering provider.”
The new version of the DNA test is expected to be available in about a year.
According to Dr. Imperiale, further research is needed to ascertain the test’s acceptability and adherence rates and to quantify its yield in population-based screening. Determining its cost-effectiveness and making it easier to use are other goals. “And most importantly, the degree of reduction in the incidence and mortality from colorectal cancer,” he said.
Cost-effectiveness and the selection of the testing interval may play roles in adherence, particularly in populations with lower rates of screening adherence than the general population, John M. Carethers, MD, AGAF, of the University of California, San Diego, noted in a related editorial.
“Adherence to screening varies according to age group, including persons in the 45- to 49-year age group who are now eligible for average-risk screening,” he wrote. “It is hoped that these newer tests will increase use and adherence and elevate the percentage of the population undergoing screening in order to reduce deaths from colorectal cancer.”
This study was sponsored by Exact Sciences Corporation, which conducted the stool testing at its laboratories.
Dr. Imperiale had no competing interests to disclose. Several study co-authors reported employment with Exact Sciences or stock and intellectual property ownership. Dr. Shaukat disclosed consulting for Freenome. Dr. Carethers reported ties to Avantor Inc. and Geneoscopy.
FROM NEW ENGLAND JOURNAL OF MEDICINE