The Journal of Clinical Outcomes Management® is an independent, peer-reviewed journal offering evidence-based, practical information for improving the quality, safety, and value of health care.
New blood test could reshape early CRC screening
A simple blood test that looks for a combination of specific RNA snippets may become a novel way to screen for early-onset colorectal cancer, suggests a new study published online in Gastroenterology.
Researchers identified four microRNAs that together comprise a signature biomarker that can be used to detect and diagnose the presence of colorectal cancer from a liquid biopsy in a younger population.
MicroRNAs, or miRNAs, are small RNA molecules that do not encode proteins but are used instead to regulate gene expression. The study authors developed and validated a panel that detects four miRNAs occurring at higher levels in plasma samples from patients with early-onset colorectal cancer, with high sensitivity and specificity.
“The point would be to use this test as a routine part of annual healthcare, or for people in high-risk families every 6 months,” study senior author Ajay Goel, PhD, MS, chair of the department of molecular diagnostics and experimental therapeutics at the City of Hope Comprehensive Cancer Center, Duarte, Calif., said in an interview.
“It’s affordable, it can be done easily from a small tube of blood, and as long as that test stays negative, you’re good,” Dr. Goel said, because even if patients miss a test, the next one, whether it’s 6 months or a year later, will catch any potential cancer.
“Colon cancer is not going to kill somebody overnight, so this should be used as a precursor to colonoscopy. As long as that test is negative, you can postpone a colonoscopy,” he said.
Andrew T. Chan, MD, MPH, a professor of medicine at Harvard Medical School and vice chair of gastroenterology at Massachusetts General Hospital, both in Boston, who was not involved in the research, said in an interview that the findings are exciting.
“It would be really value-added to have a blood-based screening test,” Dr. Chan said, adding that researchers have pursued multiple different avenues in pursuit of one. “It’s very nice to see that area progress and to actually have some evidence that microRNAs could be a potential biomarker for colorectal cancer.”
Screening now insufficient for early-onset disease
The U.S. Preventive Services Task Force recently lowered the recommended age to 45 years to begin screening for colorectal cancer. Part of the rationale for the change came from the rising rates of early-onset colorectal cancer, a distinct clinical and molecular entity that tends to have poorer survival than late-onset disease, the authors noted.
Early-onset disease, occurring primarily in people under 50 without a family or genetic history of colorectal cancer, now makes up about 10%-15% of all new cases and continues to rise, they wrote.
“Early-onset colorectal cancer patients are more likely to exhibit an advanced stage tumor at initial presentation, distal tumor localization, signet ring histology, and a disease presentation with concurrent metastasis,” the authors wrote. “This raises the logistical clinical concern that, since the tumors in early-onset colorectal cancer patients are often more aggressive than those with late-onset colorectal cancer, a delayed diagnosis could have a significant adverse impact and can lead to early death.”
Yet current screening strategies are insufficient for detecting enough early-onset cases, the authors assert.
Colonoscopies are invasive, carry a risk for complications, and are cost- and time-prohibitive for people at average risk. Meanwhile, existing fecal and blood tests “lack adequate diagnostic performance for the early detection of colorectal cancer, especially early-onset colorectal cancer, as these assays have yet to be explored or developed in this population,” they wrote.
The ideal “diagnostic modality should preferably be acceptable to healthy individuals, inexpensive, rapid, and preferably noninvasive,” they note.
Finding and validating miRNA
The researchers therefore turned to the concept of a liquid biopsy, focusing on identifying miRNAs associated with colorectal cancer, because their expression tends to be stable in tissues, blood, stool, and other body fluids.
They first analyzed an miRNA expression profiling dataset from 1,061 individuals to look for miRNAs whose expression was higher in colorectal cancer patients. The dataset included 42 patients with stage 1-2 early-onset colorectal cancer, 370 patients with stage 1-2 late-onset colorectal cancer, 62 patients younger than 50 years without cancer, and 587 patients aged 50 years or older without cancer.
The researchers found 28 miRNAs that were significantly upregulated in early-onset colorectal cancer tissue samples compared with cancer-free samples, and 11 miRNAs upregulated specifically in the early-onset colorectal cancer samples. Four of these 11 miRNAs were adequately distinct from one another and were detectable in the plasma samples that the researchers would use to train and validate them as a combination biomarker.
The researchers used 117 plasma samples from Japan, including 72 from people with early-onset colorectal cancer and 45 from healthy donors, to develop and train an assay detecting the four miRNAs. They then validated the assay using 142 plasma samples from Spain, including 77 with early-onset colorectal cancer and 65 healthy donors.
In the Japan cohort, the four-miRNA assay had a sensitivity of 90% and a specificity of 80%, with a positive predictive value (PPV) of 88% and a negative predictive value (NPV) of 84%. In the Spain cohort used for validation, the assay performed with a sensitivity of 82%, a specificity of 86%, a PPV of 88%, and an NPV of 80%.
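The reported predictive values follow arithmetically from each cohort's case mix. As a rough consistency check (a sketch only: the per-sample counts below are back-calculated from the published summary statistics, not taken from the study's raw data), the PPV and NPV can be reconstructed like this:

```python
def predictive_values(sens, spec, n_cases, n_controls):
    """Back-calculate PPV and NPV from sensitivity, specificity,
    and cohort composition. Counts are reconstructed by rounding,
    not drawn from the study's underlying data."""
    tp = round(sens * n_cases)    # true positives among cases
    fn = n_cases - tp             # false negatives
    tn = round(spec * n_controls) # true negatives among controls
    fp = n_controls - tn          # false positives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Japan (training) cohort: 72 cases, 45 healthy donors
ppv_jp, npv_jp = predictive_values(0.90, 0.80, 72, 45)

# Spain (validation) cohort: 77 cases, 65 healthy donors
ppv_es, npv_es = predictive_values(0.82, 0.86, 77, 65)
```

The reconstructed values land within rounding of the reported 88%/84% (Japan) and 88%/80% (Spain). Note that, unlike sensitivity and specificity, PPV and NPV depend on the proportion of cases in the cohort; in a general screening population, where prevalence is far lower, the PPV would be substantially lower than in these enriched case-control samples.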
“Taken together, the genome-wide transcriptomic profiling approach was indeed robust, as it identified the biomarkers that were successfully trained and validated in plasma specimens from independent cohorts of patients with early-onset colorectal cancer, hence highlighting their translational potential in the clinic for the detection of this malignancy in early stages,” the authors wrote.
By disease stage, the four-miRNA panel identified both early-stage (stage 1-2; sensitivity, 92%; specificity, 80%) and late-stage (stage 3-4; sensitivity, 79%; specificity, 86%) early-onset colorectal cancer in the validation cohort.
Clinical benefit of blood test
The researchers also assessed the benefit-harm trade-off of this liquid biopsy assay compared with other screening modalities, taking into consideration the risk for false positives and false negatives.
A decision curve analysis “revealed that the miRNA panel achieved a higher net benefit regardless of threshold probability in comparison to intervention for all patients or none of the patients,” the researchers reported. “These findings suggest that this miRNA panel might offer more clinical benefit with regards to the avoidance of physical harm and misdiagnosis.”
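Decision curve analysis compares a test against the two default strategies ("treat all" and "treat none") using net benefit, conventionally defined as TP/N − (FP/N) × p/(1 − p) at threshold probability p. A minimal sketch of that calculation, using illustrative counts loosely modeled on the validation cohort (these particular numbers are assumptions for demonstration, not figures from the study):

```python
def net_benefit(tp, fp, n, p_t):
    """Standard decision-curve net benefit at threshold probability p_t:
    credit for true positives minus a penalty for false positives,
    weighted by the odds implied by the threshold."""
    return tp / n - (fp / n) * (p_t / (1 - p_t))

N = 142  # validation cohort size: 77 cases, 65 controls
P_T = 0.5  # one example threshold probability

# Hypothetical test performance at this threshold: 63 TP, 9 FP
nb_test = net_benefit(63, 9, N, P_T)

# "Treat all": every sample is called positive,
# so TP = all 77 cases and FP = all 65 controls
nb_all = net_benefit(77, 65, N, P_T)

# "Treat none": no positives at all, so net benefit is 0 by definition
nb_none = 0.0
```

With these illustrative counts, the test's net benefit exceeds both default strategies at this threshold; the authors report that the miRNA panel showed this advantage across the range of threshold probabilities they examined.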
They also found that expression levels of these four miRNAs significantly decreased after surgical removal of the colorectal cancer, strongly suggesting that the miRNAs do originate with the tumor.
“To have a relatively inexpensive and noninvasive means of screening a younger population is a very important unmet need,” said Dr. Chan.
It’s not feasible to recommend colonoscopies in people younger than 45 years because of resource constraints, he said, so “this is a wonderful new development to actually have the possibility of a blood-based screening test for younger individuals, especially given that rising incidence of young-onset colorectal cancer.”
Dr. Goel pointed out that only half of those recommended to get screened for colorectal cancer actually undergo screening, and a large reason for that is the desire to avoid colonoscopy, a concern echoed in the findings of a recent study by Christopher V. Almario, MD, MSHPM, and colleagues.
Dr. Goel expects that this strategy would increase compliance with screening because it’s less invasive and more affordable, particularly for younger patients. He estimates that a commercial assay using this panel, if approved by the Food and Drug Administration, should cost less than $100.
Dr. Almario, an assistant professor of medicine at the Cedars-Sinai Karsh Division of Gastroenterology and Hepatology in Los Angeles, agreed that an FDA-approved blood-based screening test would be a “game-changer,” as long as it’s accurate and effective.
Though Dr. Almario did not review the data in Goel’s study, he said in an interview that a blood test for colorectal cancer screening would be “the holy grail, so to speak, in terms of really moving the needle on screening uptake.”
Next steps
Dr. Chan noted that one caveat to consider with this study is that it was done in a relatively small population of individuals, even though the test was validated in a second set of plasma samples.
“Additional validation needs to be done in larger numbers of patients to really understand the performance characteristics because it is possible that some of these signatures may, when they’re using a broader group of individuals, not perform as well,” Dr. Chan said.
Dr. Goel said he is working with several companies right now to develop and further test a commercial product. He anticipates it may be shelf-ready in 2-5 years.
“The take-home message is that clinicians need to be more cognizant of the fact that incidence of this disease is rising, and we need to do something about it,” Dr. Goel said, particularly for those younger than 45 years who currently don’t have a screening option.
“Now we have at least a sliver of hope for those who might be suffering from this disease, for those for whom we have zero screening or diagnostic tests,” he said.
The research was funded by the National Cancer Institute and Fundación MAPFRE Guanarteme. Dr. Goel, Dr. Chan, and Dr. Almario reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
FROM GASTROENTEROLOGY
Social isolation, loneliness tied to death, MI, stroke: AHA
People who are socially isolated or lonely have an increased risk for myocardial infarction, stroke, and death, independent of other factors, the American Heart Association concludes in a new scientific statement.
More than 4 decades of research have “clearly demonstrated that social isolation and loneliness are both associated with adverse health outcomes,” writing group chair Crystal Wiley Cené, MD, University of California San Diego Health, said in a news release.
“Given the prevalence of social disconnectedness across the United States, the public health impact is quite significant,” Dr. Cené added.
The writing group says more research is needed to develop, implement, and test interventions to improve cardiovascular (CV) and brain health in people who are socially isolated or lonely.
The scientific statement was published online in the Journal of the American Heart Association.
Common and potentially deadly
Social isolation is defined as having infrequent in-person contact with other people; loneliness is the feeling of being alone or less connected with others than desired.
It’s estimated that one-quarter of community-dwelling Americans 65 years and older are socially isolated, with even more experiencing loneliness.
The problem is not limited to older adults, however. Research suggests that younger adults also experience social isolation and loneliness, which might be attributed to more social media use and less frequent in-person activities.
Dr. Cené and colleagues reviewed observational and intervention research on social isolation published through July 2021 to examine the impact of social isolation and loneliness on CV and brain health.
The evidence is most consistent for a direct association between social isolation, loneliness, and death from coronary heart disease (CHD) and stroke, they reported.
For example, one meta-analysis of 19 studies showed that social isolation and loneliness increase the risk for CHD by 29%; most of these studies focused on acute MI and/or CHD death as the measure of CHD.
A meta-analysis of eight longitudinal observational studies showed social isolation and loneliness were associated with a 32% increased risk for stroke, after adjustment for age, sex, and socioeconomic status.
The literature also suggests social isolation and loneliness are associated with worse prognoses in adults with existing CHD or history of stroke.
One systematic review showed that socially isolated people with CHD had a two- to threefold increase in illness and death over 6 years, independent of cardiac risk factors.
Other research suggests that socially isolated adults with three or fewer social contacts per month have a 40% increased risk for recurrent stroke or MI.
There are fewer and less robust data on the association between social isolation and loneliness with heart failure (HF), dementia, and cognitive impairment, the writing group noted.
It’s also unclear whether actually being isolated (social isolation) or feeling isolated (loneliness) matters most for cardiovascular and brain health, because only a few studies have examined both in the same sample, they pointed out.
However, a study published in Neurology in June showed that older adults who reported feeling socially isolated had worse cognitive function at baseline than did those who did not report social isolation, and were 26% more likely to have dementia at follow-up, as reported by this news organization.
Urgent need for interventions
“There is an urgent need to develop, implement, and evaluate programs and strategies to reduce the negative effects of social isolation and loneliness on cardiovascular and brain health, particularly for at-risk populations,” Dr. Cené said in the news release.
She encourages clinicians to ask patients about their social life and whether they are satisfied with their level of interactions with friends and family, and to be prepared to refer patients who are socially isolated or lonely, especially those with a history of CHD or stroke, to community resources to help them connect with others.
Fitness programs and recreational activities at senior centers, as well as interventions that address negative thoughts of self-worth and other negative thinking, have shown promise in reducing isolation and loneliness, the writing group said.
This scientific statement was prepared by the volunteer writing group on behalf of the AHA Social Determinants of Health Committee of the Council on Epidemiology and Prevention and the Council on Quality of Care and Outcomes Research; the Prevention Science Committee of the Council on Epidemiology and Prevention and the Council on Cardiovascular and Stroke Nursing; the Council on Arteriosclerosis, Thrombosis, and Vascular Biology; and the Stroke Council.
This research had no commercial funding. Members of the writing group have disclosed no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
FROM THE JOURNAL OF THE AMERICAN HEART ASSOCIATION
A ‘promising target’ to improve outcomes in late-life depression
A new study sheds light on the neurologic underpinnings of late-life depression (LLD) with apathy and its frequently poor response to treatment.
Investigators headed by Faith Gunning, PhD, of the Institute of Geriatric Psychiatry, Weill Cornell Medicine, New York, analyzed baseline and posttreatment brain MRIs and functional MRIs (fMRIs) of older adults with depression who participated in a 12-week open-label nonrandomized clinical trial of escitalopram. Participants had undergone clinical and cognitive assessments.
Disturbances were found in resting state functional connectivity (rsFC) between the salience network (SN) and other large-scale networks that support goal-directed behavior, especially in patients with depression who also had features of apathy.
“This study suggests that, among older adults with depression, distinct network abnormalities may be associated with apathy and poor response to first-line pharmacotherapy and may serve as promising targets for novel interventions,” the investigators write.
The study was published online in JAMA Network Open.
A leading cause of disability
LLD is a “leading cause of disability and medical morbidity in older adulthood,” with one-third to one-half of patients with LLD also suffering from apathy, the authors write.
Older adults with depression and comorbid apathy have poorer outcomes, including lower remission rates and poorer response to first-line antidepressants, compared with those with LLD but who do not have apathy.
Despite the high prevalence of apathy in people with depression, “little is known about its optimal treatment and, more broadly, about the brain-based mechanisms of apathy,” the authors note.
An “emerging hypothesis” points to the role of a compromised SN and its large-scale connections between apathy and poor treatment response in LLD.
The SN (which includes the insula and the dorsal anterior cingulate cortex) “attributes motivational value to a stimulus” and “dynamically coordinates the activity of other large-scale networks, including the executive control network and default mode network (DMN).”
Preliminary studies of apathy in patients with depression report reduced volume in structures of the SN and suggest disruption in functional connectivity among the SN, DMN, and the executive control network, but the mechanisms linking apathy to poor antidepressant response in LLD "are not well understood."
"Connectometry" is a "novel approach to diffusion MRI analysis that quantifies the local connectome of white matter pathways." It has been used along with resting-state imaging, but it had not previously been applied to the study of apathy.
The researchers investigated the functional connectivity of the SN, hypothesizing that alterations in connectivity among key nodes of the SN and other core circuits that modulate goal-directed behavior (DMN and the executive control network) were implicated in individuals with depression and apathy.
They applied connectometry to “identify pathway-level disruptions in structural connectivity,” hypothesizing that compromise of frontoparietal and frontolimbic pathways would be associated with apathy in patients with LLD.
They also wanted to know whether apathy-related network abnormalities were associated with antidepressant response after 12 weeks of pharmacotherapy with the selective serotonin reuptake inhibitor escitalopram.
Emerging model
The study included 40 older adults (65% women; mean [SD] age, 70.0 [6.6] years) with DSM-IV–diagnosed major depressive disorder (without psychotic features) who were enrolled in a single-group, open-label escitalopram treatment trial.
The Hamilton Depression Rating Scale (HAM-D) was used to assess depression, and the Apathy Evaluation Scale was used to assess apathy; on the latter, a score greater than 40.5 represents "clinically significant apathy." Participants completed both measures at baseline and after 12 weeks of escitalopram treatment.
They also completed a battery of neuropsychological tests to assess cognition and underwent MRI. fMRI was used to map group differences in rsFC of the SN, and diffusion connectometry was used to "evaluate pathway-level disruptions in structural connectivity."
Of the participants, 20 had clinically significant apathy. There were no differences in age, sex, educational level, or the severity of depression at baseline between those who did and those who did not have apathy.
Compared with participants with depression but not apathy, those with depression and comorbid apathy had lower rsFC of salience network seeds (specifically, the dorsolateral prefrontal cortex [DLPFC], premotor cortex, midcingulate cortex, and paracentral lobule).
They also had greater rsFC in the lateral temporal cortex and temporal pole (z > 2.7; Bonferroni-corrected threshold of P < .0125).
Additionally, participants with apathy had lower structural connectivity in the splenium, cingulum, and fronto-occipital fasciculus, compared with those without apathy (t > 2.5; false discovery rate–corrected P = .02).
Of the 27 participants who completed escitalopram treatment, 16 (59%) achieved remission (defined as an HAM-D score <10). Participants with apathy had a poorer response to escitalopram treatment.
Lower insula-DLPFC/midcingulate cortex rsFC was associated with less improvement in depressive symptoms (HAM-D percentage change, beta [df] = .588 [26]; P = .001) as well as a greater likelihood that the participant would not achieve remission after treatment (odds ratio, 1.041; 95% confidence interval, 1.003-1.081; P = .04).
In regression models, lower insula-DLPFC/midcingulate cortex rsFC was found to be a mediator of the association between baseline apathy and persistence of depression.
The SN findings were also relevant to cognition. Lower dorsal anterior cingulate-DLPFC/paracentral rsFC was found to be associated with residual cognitive difficulties on measures of attention and executive function (beta [df] = .445 [26] and beta [df] = .384 [26], respectively; for each, P = .04).
“These findings support an emerging model of apathy, which proposes that apathy may arise from dysfunctional interactions among core networks (that is, SN, DMN, and executive control) that support motivated behavior,” the investigators write.
“This may cause a failure of network integration, leading to difficulties with salience processing, action planning, and behavioral initiation that manifests clinically as apathy,” they conclude.
One limitation they note was the lack of longitudinal follow-up after acute treatment and a “relatively limited neuropsychological battery.” Therefore, they could not “establish the persistence of treatment differences nor the specificity of cognitive associations.”
The investigators add that “novel interventions that modulate interactions among affected circuits may help to improve clinical outcomes in this distinct subgroup of older adults with depression, for whom few effective treatments exist.”
Commenting on the study, Helen Lavretsky, MD, professor of psychiatry in residence and director of the Late-Life Mood, Stress, and Wellness Research Program and the Integrative Psychiatry Clinic, Jane and Terry Semel Institute for Neuroscience and Human Behavior, University of California, Los Angeles, said the findings "can be used in future studies targeting apathy and the underlying neural mechanisms of brain connectivity." Dr. Lavretsky was not involved with the study.
The study was supported by grants from the National Institute of Mental Health. Dr. Gunning reported receiving grants from the National Institute of Mental Health during the conduct of the study and grants from Akili Interactive. The other authors’ disclosures are listed on the original article. Dr. Lavretsky reports no relevant financial relationships.
A version of this article first appeared on Medscape.com.
A new study sheds light on the neurologic underpinnings of late-life depression (LLD) with apathy and its frequently poor response to treatment.
Investigators headed by Faith Gunning, PhD, of the Institute of Geriatric Psychiatry, Weill Cornell Medicine, New York, analyzed baseline and posttreatment brain MRIs and functional MRIs (fMRIs) of older adults with depression who participated in a 12-week open-label nonrandomized clinical trial of escitalopram. Participants had undergone clinical and cognitive assessments.
Disturbances were found in resting state functional connectivity (rsFC) between the salience network (SN) and other large-scale networks that support goal-directed behavior, especially in patients with depression who also had features of apathy.
“This study suggests that, among older adults with depression, distinct network abnormalities may be associated with apathy and poor response to first-line pharmacotherapy and may serve as promising targets for novel interventions,” the investigators write.
The study was published online in JAMA Network Open.
A leading cause of disability
LLD is a “leading cause of disability and medical morbidity in older adulthood,” with one-third to one-half of patients with LLD also suffering from apathy, the authors write.
Older adults with depression and comorbid apathy have poorer outcomes, including lower remission rates and poorer response to first-line antidepressants, compared with those with LLD but who do not have apathy.
Despite the high prevalence of apathy in people with depression, “little is known about its optimal treatment and, more broadly, about the brain-based mechanisms of apathy,” the authors note.
An “emerging hypothesis” points to the role of a compromised SN and its large-scale connections between apathy and poor treatment response in LLD.
The SN (which includes the insula and the dorsal anterior cingulate cortex) “attributes motivational value to a stimulus” and “dynamically coordinates the activity of other large-scale networks, including the executive control network and default mode network (DMN).”
Preliminary studies of apathy in patients with depression report reduced volume in structures of the SN and suggest disruption in functional connectivity among the SN, DMN, and the executive control network; but the mechanisms linking apathy to poor antidepressant response in LLD “are not well understood.”
“Connectometry” is a “novel approach to diffusion MRI analysis that quantifies the local connectome of white matter pathways.” It has been used along with resting-state imagery, but it had not been used in studying apathy.
The researchers investigated the functional connectivity of the SN, hypothesizing that alterations in connectivity among key nodes of the SN and other core circuits that modulate goal-directed behavior (DMN and the executive control network) were implicated in individuals with depression and apathy.
They applied connectometry to “identify pathway-level disruptions in structural connectivity,” hypothesizing that compromise of frontoparietal and frontolimbic pathways would be associated with apathy in patients with LLD.
They also wanted to know whether apathy-related network abnormalities were associated with antidepressant response after 12 weeks of pharmacotherapy with the selective serotonin reuptake inhibitor escitalopram.
Emerging model
The study included 40 older adults (65% women; mean [SD] age, 70.0 [6.6] years) with DSM-IV–diagnosis major depressive disorder (without psychotic features) who were from a single-group, open-label escitalopram treatment trial.
The Hamilton-Depression (HAM-D) scale was used to assess depression, while the Apathy Evaluation Scale was used to assess apathy. On the Apathy Evaluation Scale, a score of greater than 40.5 represents “clinically significant apathy.” Participants completed these tests at baseline and after 12 weeks of escitalopram treatment.
They also completed a battery of neuropsychological tests to assess cognition and underwent MRI imaging. fMRI was used to map group differences in rsFC of the SN, and diffusion connectometry was used to “evaluate pathway-level disruptions in structural connectivity.”
Of the participants, 20 had clinically significant apathy. There were no differences in age, sex, educational level, or the severity of depression at baseline between those who did and those who did not have apathy.
Compared with participants with depression but not apathy, those with depression and comorbid apathy had lower rsFC of salience network seeds (specifically, the dorsolateral prefrontal cortex [DLPFC], premotor cortex, midcingulate cortex, and paracentral lobule).
They also had greater rsFC in the lateral temporal cortex and temporal pole (z > 2.7; Bonferroni-corrected threshold of P < .0125).
Additionally, participants with apathy had lower structural connectivity in the splenium, cingulum, and fronto-occipital fasciculus, compared with those without apathy (t > 2.5; false discovery rate–corrected P = .02).
Of the 27 participants who completed escitalopram treatment; 16 (59%) achieved remission (defined as an HAM-D score <10). Participants with apathy had poorer response to escitalopram treatment.
Lower insula-DLPFC/midcingulate cortex rsFC was associated with less improvement in depressive symptoms (HAM-D percentage change, beta [df] = .588 [26]; P = .001) as well as a greater likelihood that the participant would not achieve remission after treatment (odds ratio, 1.041; 95% confidence interval, 1.003-1.081; P = .04).
In regression models, lower insula-DLPFC/midcingulate cortex rsFC was found to be a mediator of the association between baseline apathy and persistence of depression.
The SN findings were also relevant to cognition. Lower dorsal anterior cingulate-DLPFC/paracentral rsFC was found to be associated with residual cognitive difficulties on measures of attention and executive function (beta [df] = .445 [26] and beta [df] = .384 [26], respectively; for each, P = .04).
“These findings support an emerging model of apathy, which proposes that apathy may arise from dysfunctional interactions among core networks (that is, SN, DMN, and executive control) that support motivated behavior,” the investigators write.
“This may cause a failure of network integration, leading to difficulties with salience processing, action planning, and behavioral initiation that manifests clinically as apathy,” they conclude.
One limitation they note was the lack of longitudinal follow-up after acute treatment and a “relatively limited neuropsychological battery.” Therefore, they could not “establish the persistence of treatment differences nor the specificity of cognitive associations.”
The investigators add that “novel interventions that modulate interactions among affected circuits may help to improve clinical outcomes in this distinct subgroup of older adults with depression, for whom few effective treatments exist.”
Commenting on the study, Helen Lavretsy, MD, professor of psychiatry in residence and director of the Late-Life Mood, Stress, and Wellness Research Program and the Integrative Psychiatry Clinic, Jane and Terry Semel Institute for Neuroscience and Human Behavior, University of California, Los Angeles, said, the findings “can be used in future studies targeting apathy and the underlying neural mechanisms of brain connectivity.” Dr. Lavretsy was not involved with the study.
The study was supported by grants from the National Institute of Mental Health. Dr. Gunning reported receiving grants from the National Institute of Mental Health during the conduct of the study and grants from Akili Interactive. The other authors’ disclosures are listed on the original article. Dr. Lavretsky reports no relevant financial relationships.
A version of this article first appeared on Medscape.com.
A new study sheds light on the neurologic underpinnings of late-life depression (LLD) with apathy and its frequently poor response to treatment.
Investigators headed by Faith Gunning, PhD, of the Institute of Geriatric Psychiatry, Weill Cornell Medicine, New York, analyzed baseline and posttreatment brain MRIs and functional MRIs (fMRIs) of older adults with depression who participated in a 12-week open-label nonrandomized clinical trial of escitalopram. Participants had undergone clinical and cognitive assessments.
Disturbances were found in resting state functional connectivity (rsFC) between the salience network (SN) and other large-scale networks that support goal-directed behavior, especially in patients with depression who also had features of apathy.
“This study suggests that, among older adults with depression, distinct network abnormalities may be associated with apathy and poor response to first-line pharmacotherapy and may serve as promising targets for novel interventions,” the investigators write.
The study was published online in JAMA Network Open.
A leading cause of disability
LLD is a “leading cause of disability and medical morbidity in older adulthood,” with one-third to one-half of patients with LLD also suffering from apathy, the authors write.
Older adults with depression and comorbid apathy have poorer outcomes, including lower remission rates and poorer response to first-line antidepressants, compared with those with LLD but who do not have apathy.
Despite the high prevalence of apathy in people with depression, “little is known about its optimal treatment and, more broadly, about the brain-based mechanisms of apathy,” the authors note.
An “emerging hypothesis” points to the role of a compromised SN and its large-scale connections between apathy and poor treatment response in LLD.
The SN (which includes the insula and the dorsal anterior cingulate cortex) “attributes motivational value to a stimulus” and “dynamically coordinates the activity of other large-scale networks, including the executive control network and default mode network (DMN).”
Preliminary studies of apathy in patients with depression report reduced volume in structures of the SN and suggest disruption in functional connectivity among the SN, DMN, and the executive control network; but the mechanisms linking apathy to poor antidepressant response in LLD “are not well understood.”
“Connectometry” is a “novel approach to diffusion MRI analysis that quantifies the local connectome of white matter pathways.” It has been used along with resting-state imagery, but it had not been used in studying apathy.
The researchers investigated the functional connectivity of the SN, hypothesizing that alterations in connectivity among key nodes of the SN and other core circuits that modulate goal-directed behavior (DMN and the executive control network) were implicated in individuals with depression and apathy.
They applied connectometry to “identify pathway-level disruptions in structural connectivity,” hypothesizing that compromise of frontoparietal and frontolimbic pathways would be associated with apathy in patients with LLD.
They also wanted to know whether apathy-related network abnormalities were associated with antidepressant response after 12 weeks of pharmacotherapy with the selective serotonin reuptake inhibitor escitalopram.
Emerging model
The study included 40 older adults (65% women; mean [SD] age, 70.0 [6.6] years) with DSM-IV–diagnosed major depressive disorder (without psychotic features) who were enrolled in a single-group, open-label escitalopram treatment trial.
The Hamilton Depression Rating Scale (HAM-D) was used to assess depression, and the Apathy Evaluation Scale was used to assess apathy; on the latter, a score greater than 40.5 represents “clinically significant apathy.” Participants completed these assessments at baseline and after 12 weeks of escitalopram treatment.
They also completed a battery of neuropsychological tests to assess cognition and underwent MRI. Functional MRI was used to map group differences in resting-state functional connectivity (rsFC) of the SN, and diffusion connectometry was used to “evaluate pathway-level disruptions in structural connectivity.”
Of the participants, 20 had clinically significant apathy. There were no differences in age, sex, educational level, or the severity of depression at baseline between those who did and those who did not have apathy.
Compared with participants with depression but not apathy, those with depression and comorbid apathy had lower rsFC of salience network seeds (specifically, the dorsolateral prefrontal cortex [DLPFC], premotor cortex, midcingulate cortex, and paracentral lobule).
They also had greater rsFC in the lateral temporal cortex and temporal pole (z > 2.7; Bonferroni-corrected threshold of P < .0125).
Additionally, participants with apathy had lower structural connectivity in the splenium, cingulum, and fronto-occipital fasciculus, compared with those without apathy (t > 2.5; false discovery rate–corrected P = .02).
Of the 27 participants who completed escitalopram treatment, 16 (59%) achieved remission (defined as an HAM-D score <10). Participants with apathy had a poorer response to escitalopram treatment.
Lower insula-DLPFC/midcingulate cortex rsFC was associated with less improvement in depressive symptoms (HAM-D percentage change, beta [df] = .588 [26]; P = .001) as well as a greater likelihood that the participant would not achieve remission after treatment (odds ratio, 1.041; 95% confidence interval, 1.003-1.081; P = .04).
In regression models, lower insula-DLPFC/midcingulate cortex rsFC was found to be a mediator of the association between baseline apathy and persistence of depression.
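Mediation analyses of this kind typically follow product-of-coefficients logic: the apathy-to-outcome association is decomposed into a path through the mediator (here, rsFC) and a direct path. The sketch below illustrates that logic on synthetic data only; it is not the authors' code, and the variable names (`apathy`, `rsfc`, `ham_d_change`) and effect sizes are invented for illustration.

```python
# Generic product-of-coefficients mediation sketch on synthetic data.
# Not the study's analysis; it only illustrates how a mediator (rsFC)
# can carry part of an apathy -> depression-outcome association.
import numpy as np

rng = np.random.default_rng(0)
n = 40  # same order of magnitude as the study sample

apathy = rng.normal(size=n)                                 # predictor (X)
rsfc = -0.6 * apathy + rng.normal(scale=0.5, size=n)        # mediator (M)
ham_d_change = -0.7 * rsfc + 0.1 * apathy \
    + rng.normal(scale=0.5, size=n)                         # outcome (Y)

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

a = slope(apathy, rsfc)  # X -> M path
# M -> Y path, controlling for X:
X = np.column_stack([np.ones(n), rsfc, apathy])
b = np.linalg.lstsq(X, ham_d_change, rcond=None)[0][1]

indirect = a * b                       # mediated (indirect) effect
total = slope(apathy, ham_d_change)    # total effect of X on Y
print(f"indirect effect {indirect:.2f} of total {total:.2f}")
```

In practice the indirect effect is usually tested with a bootstrap confidence interval rather than a point estimate alone.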
The SN findings were also relevant to cognition. Lower dorsal anterior cingulate-DLPFC/paracentral rsFC was found to be associated with residual cognitive difficulties on measures of attention and executive function (beta [df] = .445 [26] and beta [df] = .384 [26], respectively; for each, P = .04).
“These findings support an emerging model of apathy, which proposes that apathy may arise from dysfunctional interactions among core networks (that is, SN, DMN, and executive control) that support motivated behavior,” the investigators write.
“This may cause a failure of network integration, leading to difficulties with salience processing, action planning, and behavioral initiation that manifests clinically as apathy,” they conclude.
One limitation they note was the lack of longitudinal follow-up after acute treatment and a “relatively limited neuropsychological battery.” Therefore, they could not “establish the persistence of treatment differences nor the specificity of cognitive associations.”
The investigators add that “novel interventions that modulate interactions among affected circuits may help to improve clinical outcomes in this distinct subgroup of older adults with depression, for whom few effective treatments exist.”
Commenting on the study, Helen Lavretsky, MD, professor of psychiatry in residence and director of the Late-Life Mood, Stress, and Wellness Research Program and the Integrative Psychiatry Clinic, Jane and Terry Semel Institute for Neuroscience and Human Behavior, University of California, Los Angeles, said the findings “can be used in future studies targeting apathy and the underlying neural mechanisms of brain connectivity.” Dr. Lavretsky was not involved with the study.
The study was supported by grants from the National Institute of Mental Health. Dr. Gunning reported receiving grants from the National Institute of Mental Health during the conduct of the study and grants from Akili Interactive. The other authors’ disclosures are listed on the original article. Dr. Lavretsky reports no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JAMA NETWORK OPEN
Neuropathy drives hypoglycemia cluelessness in T1D
Researchers published the study covered in this summary on researchsquare.com as a preprint that has not yet been peer reviewed.
Key takeaways
- In Japanese adults with type 1 diabetes, insulin-pump treatment (continuous subcutaneous insulin infusion) and higher problem-solving perception appear protective against impaired awareness of hypoglycemia (IAH), while diabetic peripheral neuropathy (DPN) is associated with increased risk.
- Diabetes distress and fear of hypoglycemia are common in people with IAH.
Why this matters
- Adults with type 1 diabetes and IAH have a reduced ability to perceive hypoglycemic symptoms and are at risk of severe hypoglycemic events because they are unable to take immediate corrective action.
- This is the first study to identify protective factors and risk factors of IAH in Japanese adults with type 1 diabetes.
- People with IAH may deliberately loosen tight glucose management and intentionally omit insulin injections to prevent severe hypoglycemia.
- The information in this report may help improve the management of people with problematic hypoglycemia, the authors suggested. Treatment with an insulin pump and structured education aimed at improving problem-solving skills may be useful interventions for adults with type 1 diabetes and IAH, they suggested.
Study design
- The study involved a cross-sectional analysis of 288 Japanese adults with type 1 diabetes (about 37% men and 63% women) who averaged 50 years of age, had had diabetes for an average of about 18 years, and had an average baseline hemoglobin A1c of 7.7%.
- The cohort included 55 people with IAH (19%) and 233 with no impairment of their hypoglycemia awareness, based on their hypoglycemia awareness questionnaire score.
Key results
- DPN was significantly more prevalent in the IAH group than in the control group (26.5% vs. 12.0%). In a logistic regression analysis, the odds ratio for DPN among people with IAH, compared with those without IAH, was 2.63, but there were no differences in other complications or in HbA1c levels.
- Treatment with continuous subcutaneous insulin therapy (an insulin pump) was significantly less prevalent in the IAH group, compared with those without IAH (23.6% vs 39.5%), with an adjusted odds ratio of 0.48. The two subgroups showed no differences in use of continuous glucose monitoring, used by 56% of the people in each of the two subgroups.
- The two subgroups showed no differences in their healthy lifestyle score, sleep debt, or rates of excessive drinking.
- Mean autonomic symptom scores for both sweating and shaking were significantly reduced in the IAH group, but no between-group differences appeared for palpitations or hunger.
- All mean neuroglycopenic symptom scores were significantly lower in those without IAH, including confusion and speech difficulty.
- Scores for measures of diabetes distress and for the worry component of the fear of hypoglycemia were significantly higher in the IAH group, but there were no differences in other psychological measures.
- Higher problem-solving perception scores were significantly associated with decreased IAH risk (calculated odds ratio, 0.54), but other aspects of hypoglycemia problem-solving, such as detection control, goal setting, and strategy evaluation, showed no significant links.
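As a sanity check, the unadjusted odds ratio for DPN can be reproduced from group prevalences of 26.5% and 12.0%, the pairing consistent with the reported OR of 2.63. A minimal sketch of the 2×2 arithmetic:

```python
# Reproduce the unadjusted odds ratio for DPN from group prevalences.
# Assumes 26.5% DPN prevalence in the IAH group and 12.0% in controls
# (the pairing consistent with the reported OR of ~2.63).
p_iah, p_ctrl = 0.265, 0.120

def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

or_dpn = odds(p_iah) / odds(p_ctrl)
print(f"unadjusted OR ~ {or_dpn:.2f}")  # ~2.64, close to the reported 2.63
```

The small gap from the published 2.63 would reflect rounding of the prevalences and any covariate adjustment in the authors' model.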
Limitations
- The study used a cross-sectional design, which is not suited to making causal inferences.
- The authors characterized DPN as either present or absent. They did not evaluate or analyze the severity of peripheral neuropathy.
- The authors evaluated diabetic cardiac autonomic neuropathy (DCAN) by a person’s coefficient of variation of R-R intervals, whereas definitive diagnosis of DCAN requires at least two positive results on cardiac autonomic testing. More rigorous evaluation using a more definitive assessment of DCAN is needed to relate DCAN to IAH status.
Disclosures
- The study received no commercial funding.
- The authors have disclosed no relevant financial relationships.
This is a summary of a preprint research study, “Protective and risk factors of impaired awareness of hypoglycemia in patients with type 1 diabetes: a cross-sectional analysis of baseline data from the PR-IAH study,” written by researchers at several hospitals in Japan, all affiliated with the National Hospital Organization, on Research Square. The study has not yet been peer reviewed. The full text of the study can be found on researchsquare.com.
A version of this article first appeared on Medscape.com.
Onset and awareness of hypertension varies by race, ethnicity
Black and Hispanic adults are diagnosed with hypertension at a significantly younger age than are White adults, and they also are more likely than White adults to be unaware of undiagnosed high blood pressure, based on national survey data collected from 2011 to 2020.
“Earlier hypertension onset in Black and Hispanic adults may contribute to racial and ethnic CVD disparities,” Xiaoning Huang, PhD, and associates wrote in JAMA Cardiology, also noting that “lower hypertension awareness among racial and ethnic minoritized groups suggests potential for underestimating differences in age at onset.”
Mean age at diagnosis was 46 years for the overall study sample of 9,627 participants in the National Health and Nutrition Examination Surveys over the 10 years covered in the analysis. Black adults, with a median age at diagnosis of 42 years, and Hispanic adults (median, 43 years) were significantly younger at diagnosis than White adults, who had a median age of 47 years, the investigators reported.
“Earlier age at hypertension onset may mean greater cumulative exposure to high blood pressure across the life course, which is associated with increased risk of [cardiovascular disease] and may contribute to racial disparities in hypertension-related outcomes,” said Dr. Huang and associates at Northwestern University, Chicago.
The increased cumulative exposure can be seen when age at diagnosis is stratified “across the life course.” Black and Hispanic adults were significantly more likely than White and Asian adults to be diagnosed at or before 30 years of age, and that difference continued to at least age 50 years, the investigators said.
Many adults unaware of their hypertension
There was a somewhat different trend among those in the study population who reported BP at or above 140/90 mm Hg but did not report a hypertension diagnosis. Black, Hispanic, and Asian adults all were significantly more likely than White adults to be unaware of their hypertension, the survey data showed.
Overall, 18% of those who did not report a hypertension diagnosis had a BP of 140/90 mm Hg or higher and 38% had a BP of 130/80 mm Hg or more. Broken down by race and ethnicity, 16% and 36% of Whites reporting no hypertension had BPs of 140/90 and 130/80 mm Hg, respectively; those proportions were 21% and 42% for Hispanics, 24% and 44% for Asians, and 28% and 51% for Blacks, with all of the differences between Whites and the others significant, the research team reported.
One investigator is an associate editor for JAMA Cardiology and reported receiving grants from the American Heart Association and the National Institutes of Health during the conduct of the study. None of the other investigators reported any conflicts.
FROM JAMA CARDIOLOGY
Antibiotic-resistant bacteria emerging in community settings
A new study from the Centers for Disease Control and Prevention found that carbapenem-resistant Enterobacterales (CRE) are emerging in community settings.
Traditionally, CRE has been thought of as a nosocomial infection, acquired in a hospital or other health care facility (nursing home, long-term acute care hospital, dialysis center, etc.). This is the first population-level study to show otherwise, with fully 10% of the CRE isolates found to be community acquired.
CREs are a group of multidrug-resistant bacteria considered an urgent health threat by the CDC because they can spread rapidly between patients, especially those who are most seriously ill and vulnerable, and because they are so difficult to treat. Patients often require treatment with toxic antibiotics, such as colistin, and these infections carry a high mortality rate – up to 50% in some studies.
Overall, 30% of CREs carry a carbapenemase – an enzyme that can make them resistant to carbapenem antibiotics. The genes for this are readily transferable between bacteria and help account for their spread in hospitals.
But in this study, published in the American Journal of Infection Control, 5 of the 12 CA-CRE isolates that underwent whole-genome sequencing (42%) carried a carbapenemase gene. Lead author Sandra Bulens, MPH, a health scientist in the CDC’s division of health care quality promotion, said in an interview, “The findings highlight the potential for CP-CRE to move from health care settings into the community. The fact that 5 of the 12 isolates harbored a carbapenemase gene introduces new challenges for controlling spread of CP-CRE.”
CDC researchers analyzed data from eight U.S. metropolitan areas between 2012 and 2015 as part of the CDC’s Emerging Infections Program (EIP) health care–associated infections – community interface activity, which conducts surveillance for CRE and other drug-resistant gram-negative bacteria. Cases of CA-CRE were compared with HCA-CRE, with 1,499 cases in 1,194 case-patients being analyzed. Though Klebsiella pneumoniae was the most common isolate, there were some differences between metropolitan areas.
The incidence of CRE cases per 100,000 population was 2.96 (95% confidence interval, 2.81-3.11) overall and 0.29 (95% CI, 0.25-0.25) for CA-CRE. Most CA-CRE cases were in White persons (73%) and women (84%). Urine cultures were the source of 98% of all CA-CRE cases, compared with 86% of HCA-CRE cases (P < .001). Though the numbers were small, more patients with CA-CRE had no apparent underlying medical condition (n = 51; 37%) than patients with HCA-CRE (n = 36; 3%; P < .001).
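Incidence figures like these follow standard rate-per-100,000 arithmetic with a normal-approximation (Poisson) confidence interval. The sketch below back-calculates a plausible person-year denominator from the reported 1,499 cases and 2.96 rate; that denominator is an inference for illustration, not a figure from the paper.

```python
# Incidence per 100,000 with a normal-approximation (Poisson) 95% CI.
# The denominator is back-calculated from the reported figures, not
# taken from the paper; it is here only to show the arithmetic.
import math

cases = 1499                 # CRE cases (from the article)
person_years = 50_600_000    # inferred catchment person-years

rate = cases / person_years * 100_000
se = math.sqrt(cases) / person_years * 100_000  # SE of a Poisson count
lo, hi = rate - 1.96 * se, rate + 1.96 * se
print(f"incidence {rate:.2f} per 100,000 (95% CI, {lo:.2f}-{hi:.2f})")
```

With these inputs the result reproduces the reported 2.96 (95% CI, 2.81-3.11), which suggests the published interval is of this normal-approximation form.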
Asked for independent comment, Lance Price, PhD, of George Washington University, Washington, and founding director of GW’s Antibiotic Resistance Action Center, said, “What’s striking about these data is: Who is on the front line, at least in the United States, for CRE? It’s women, older women. ... At some point, we have to frame drug resistance as a women’s health issue.”
Dr. Price noted that the 10% of patients with CA-CRE acquired it in the community. “I would argue that probably none of them had any idea, because there’s this silent community epidemic,” he said. “It’s asymptomatic carriage and transmission in the community. Somebody can be this walking reservoir of these really dangerous bacteria and have no idea.”
This is an increasingly serious problem for women, Dr. Price said, because, “with a community-acquired bladder infection, you’re going to call your doctor or go to an urgent care, and they’re not going to test you. They’re going to guess what you have, and they’re going to prescribe an antibiotic, and that antibiotic is going to fail. So then your bladder infection continues, and then you wait a few more days, and you start to get flank pain and kidney infection. ... If you start getting a fever, they might admit you. They are going to start treating you immediately, and they might miss it because you’ve got this organism that’s resistant to all the best antibiotics. ... The gateway to the blood is the UTI.”
Because of such empiric treatment and increasing resistance, the risk for treatment failure is quite high, especially for older women. Ms. Bulens, however, said that, “[although] 10% of CRE were in persons without health care risk factors, the proportion of all UTIs in this population that are CRE is going to be very, very small.”
This study involved cultures from 2012 to 2015. Before the pandemic, from 2012 to 2017, U.S. deaths from antibiotic resistance fell by 18% overall and by 30% in hospitals.
But in the first year of the COVID-19 pandemic, there was a 15% increase in infections and deaths from antimicrobial-resistant (AMR), hospital-acquired bacteria. In 2020, 29,400 patients died from AMR infections. There was a 78% increase in carbapenem-resistant Acinetobacter baumannii health care–associated infections, a 35% increase in carbapenem-resistant Enterobacterales, and 32% increases in both multidrug-resistant Pseudomonas aeruginosa and extended-spectrum beta-lactamase–producing Enterobacterales. Aside from gram-negative bacteria, methicillin-resistant Staphylococcus aureus rose 13%, and Candida auris rose 60%. But owing to limited surveillance, recent sound figures are lacking.
Ms. Bulens and Dr. Price reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM THE AMERICAN JOURNAL OF INFECTION CONTROL
‘Self-boosting’ vaccines could be immunizations of the future
Most vaccines don’t come as one-shot deals. A series of boosters is needed to step up immunity to COVID-19, tetanus, and other infectious threats over time.
But what if you could receive just one shot that boosts itself whenever you need a bump in protection?
Researchers at the Massachusetts Institute of Technology (MIT) have developed microparticles that could be used to create self-boosting vaccines that deliver their contents at carefully set time points. In a new study published in the journal Science Advances, the scientists describe how they tune the particles to release the goods at the right time and offer insights on how they can keep the particles stable until then.
How self-boosting vaccines could work
The team developed tiny particles that look like coffee cups – except instead of your favorite brew, they’re filled with vaccine.
“You can put the lid on, and then inject it into the body, and once the lid breaks, whatever is in there is released,” says study author Ana Jaklenec, PhD, a research scientist at MIT’s Koch Institute for Integrative Cancer Research.
To make the tiny cups, the researchers use various polymers already used in medical applications, such as dissolvable stitches. Then they fill the cups with vaccine material that is dried and combined with sugars and other stabilizers.
The particles can be made in various shapes and fine-tuned using polymers with different properties. Some polymers last longer in the body than others, so their choice helps determine how long everything will stay stable under the skin after the injection and when the particles will release their cargo. It could be days or months after the injection.
One challenge is that as the particles open, the environment around them becomes more acidic. The team is working on ways to curb that acidity to make the vaccine material more stable.
“We have ongoing research that has produced some really, really exciting results about their stability and [shows] that you’re able to maintain really sensitive vaccines, stable for a good period of time,” says study author Morteza Sarmadi, PhD, a research specialist at the Koch Institute.
The potential public health impact
This research, funded by the Bill & Melinda Gates Foundation, started with the developing world in mind.
“The intent was actually helping people in the developing world, because a lot of times, people don’t come back for a second injection,” says study author Robert Langer, ScD, the David H. Koch Institute professor at MIT.
But a one-shot plan could benefit the developed world, too. One reason is that self-boosting vaccines could help those who get one achieve higher antibody responses than they would with just one dose. That could mean more protection for the person and the population, because as people develop stronger immunity, germs may have less of a chance to evolve and spread.
Take the COVID-19 pandemic, for example. Only 67% of Americans are fully vaccinated, and most people eligible for first and second boosters haven’t gotten them. New variants, such as the recent Omicron ones, continue to emerge and infect.
“I think those variants would have had a lot less chance to come about if everybody that had gotten vaccinated the first time got repeat injections, which they didn’t,” says Dr. Langer.
Self-boosting vaccines could also benefit infants, children who fear shots, and older adults who have a hard time getting health care.
Also, because the vaccine material is encapsulated and its release can be staggered, this technology might let people receive, in a single injection, multiple vaccines that must now be given separately.
What comes next
The team is testing self-boosting polio and hepatitis vaccines in non-human primates. A small trial in healthy humans might follow within the next few years.
“We think that there’s really high potential for this technology, and we hope it can be developed and get to the human phase very soon,” says Dr. Jaklenec.
In smaller animal models, they are exploring the potential of self-boosting mRNA vaccines. They’re also working with scientists who are studying HIV vaccines.
“There has been some recent progress where very complex regimens seem to be working, but they’re not practical,” says Dr. Jaklenec. “And so, this is where this particular technology could be useful, because you have to prime and boost with different things, and this allows you to do that.”
This system could also extend beyond vaccines and be used to deliver cancer therapies, hormones, and biologics in a shot.
Through new work with researchers at the Georgia Institute of Technology, the team will study the potential of giving self-boosting vaccines through 3D-printed microneedles. These vaccines, which would stick on your skin like a bandage, could be self-administered and deployed globally in response to local outbreaks.
A version of this article first appeared on WebMD.com.
FROM SCIENCE ADVANCES
One in eight COVID patients likely to develop long COVID: Large study
One in eight COVID-19 patients is likely to develop long COVID, a large study published in The Lancet indicates.
The researchers determined that percentage by comparing long-term symptoms in people infected by SARS-CoV-2 with similar symptoms in uninfected people over the same time period.
Among the group of infected study participants in the Netherlands, 21.4% had at least one new or severely increased symptom 3-5 months after infection compared with before infection. When that group of 21.4% was compared with the 8.7% of uninfected people reporting such symptoms in the same study, the researchers were able to calculate a long COVID prevalence of 12.7%.
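In rough terms, the correction amounts to subtracting the background symptom rate in uninfected people from the rate in infected participants; the two-line sketch below is a simplification of the matched-cohort analysis the study actually performed:

```python
# Share of infected participants with at least one new or severely
# increased symptom 3-5 months after infection (vs. before infection).
infected_rate = 0.214
# Share of matched uninfected participants reporting comparable
# symptoms over the same period (the background rate).
background_rate = 0.087

# Excess prevalence attributed to long COVID.
long_covid_prevalence = infected_rate - background_rate
print(round(long_covid_prevalence, 3))  # → 0.127, i.e. about 1 in 8
```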
“This finding shows that post–COVID-19 condition is an urgent problem with a mounting human toll,” the study authors wrote.
The research design was novel, two editorialists said in an accompanying commentary.
Christopher Brightling, PhD, and Rachael Evans, MBChB, PhD, of the Institute for Lung Health, University of Leicester (England), noted: “This is a major advance on prior long COVID prevalence estimates as it includes a matched uninfected group and accounts for symptoms before COVID-19 infection.”
Symptoms that persist
The Lancet study found that 3-5 months after COVID (compared with before COVID) and compared with the non-COVID comparison group, the symptoms that persist were chest pain, breathing difficulties, pain when breathing, muscle pain, loss of taste and/or smell, tingling extremities, lump in throat, feeling hot and cold alternately, heavy limbs, and tiredness.
The authors noted that symptoms such as brain fog were found to be relevant to long COVID after the data collection period for this paper and were not included in this research.
Researcher Aranka V. Ballering, MSc, PhD candidate, said in an interview that fever is clearly present during the acute phase of the disease, peaking on the day of the COVID-19 diagnosis, but then wears off.
Loss of taste and smell, however, rapidly increases in severity when COVID-19 is diagnosed, but also persists and is still present 3-5 months after COVID.
Ms. Ballering, with the department of psychiatry at the University of Groningen (the Netherlands), said she was surprised by the sex difference made evident in their research: “Women showed more severe persistent symptoms than men.”
Closer to a clearer definition
The authors said their findings also pinpoint symptoms that bring us closer to a better definition of long COVID, which has many different definitions globally.
“These symptoms have the highest discriminative ability to distinguish between post–COVID-19 condition and non–COVID-19–related symptoms,” they wrote.
Researchers collected data by asking participants in the northern Netherlands, who were part of the population-based Lifelines COVID-19 study, to regularly complete digital questionnaires on 23 symptoms commonly associated with long COVID. The questionnaire was sent out 24 times to the same people between March 2020 and August 2021. At that time, people had the Alpha or earlier variants.
Participants were considered COVID-19 positive if they had either a positive test or a doctor’s diagnosis of COVID-19.
Of 76,422 study participants, the 5.5% (4,231) who had COVID were matched to 8,462 controls. Researchers accounted for sex, age, and time of completing questionnaires.
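The 1:2 matching described above (each case paired with two controls of the same sex and age who completed questionnaires at the same time) can be sketched as follows; the field names and greedy grouping logic are illustrative assumptions, not the study's actual code:

```python
from collections import defaultdict

def match_controls(cases, controls, ratio=2):
    """Greedy 1:ratio exact matching on (sex, age, wave).

    cases/controls are lists of dicts with 'id', 'sex', 'age', 'wave';
    'wave' stands in for the time a questionnaire was completed.
    """
    # Index available controls by their matching key.
    pool = defaultdict(list)
    for c in controls:
        pool[(c["sex"], c["age"], c["wave"])].append(c)
    matched = {}
    for case in cases:
        key = (case["sex"], case["age"], case["wave"])
        if len(pool[key]) >= ratio:
            # Consume `ratio` controls so each is used at most once.
            matched[case["id"]] = [pool[key].pop()["id"] for _ in range(ratio)]
    return matched

cases = [{"id": 1, "sex": "F", "age": 54, "wave": 3}]
controls = [
    {"id": 10, "sex": "F", "age": 54, "wave": 3},
    {"id": 11, "sex": "F", "age": 54, "wave": 3},
    {"id": 12, "sex": "M", "age": 54, "wave": 3},
]
print(match_controls(cases, controls))  # case 1 paired with two controls
```

With 4,231 cases matched at this ratio, the study arrives at its 8,462 controls.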
Effect of hospitalization, vaccination unclear
Ms. Ballering said it’s unclear from this data whether vaccination or whether a person was hospitalized would change the prevalence of persistent symptoms.
Because of the period when the data were collected, “the vast majority of our study population was not fully vaccinated,” she said.
However, she pointed to recent research that shows that immunization against COVID is only partially effective against persistent somatic symptoms after COVID.
Also, only 5% of men and 2.5% of women in the study were hospitalized as a result of COVID-19, so the findings can’t easily be generalized to hospitalized patients.
The Lifelines study was an add-on study to the multidisciplinary, prospective, population-based, observational Dutch Lifelines cohort study examining 167,729 people in the Netherlands. Almost all were White, a limitation of the study, and 58% were female. Average age was 54.
The editorialists also noted additional limitations of the study were that this research “did not fully consider the impact on mental health” and was conducted in one region in the Netherlands.
Janko Nikolich-Žugich, MD, PhD, director of the Aegis Consortium for Pandemic-Free Future and head of the immunobiology department at University of Arizona, Tucson, said in an interview that he agreed with the editorialists that a primary benefit of this study is that it corrected for symptoms people had before COVID, something other studies have not been able to do.
However, he cautioned about generalizing the results for the United States and other countries because of the lack of diversity in the study population with regard to education level, socioeconomic factors, and race. He pointed out that access issues are also different in the Netherlands, which has universal health care.
He said brain fog as a symptom of long COVID is of high interest and will be important to include in future studies that are able to extend the study period.
The work was funded by ZonMw; the Dutch Ministry of Health, Welfare, and Sport; Dutch Ministry of Economic Affairs; University Medical Center Groningen, University of Groningen; and the provinces of Drenthe, Friesland, and Groningen. The study authors and Dr. Nikolich-Žugich have reported no relevant financial relationships. Dr. Brightling has received consultancy and or grants paid to his institution from GlaxoSmithKline, AstraZeneca, Boehringer Ingelheim, Novartis, Chiesi, Genentech, Roche, Sanofi, Regeneron, Mologic, and 4DPharma for asthma and chronic obstructive pulmonary disease research. Dr. Evans has received consultancy fees from AstraZeneca on the topic of long COVID and from GlaxoSmithKline on digital health, and speaker’s fees from Boehringer Ingelheim on long COVID.
A version of this article first appeared on Medscape.com.
a large study published in The Lancet indicates.
The researchers determined that percentage by comparing long-term symptoms in people infected by SARS-CoV-2 with similar symptoms in uninfected people over the same time period.
Among the group of infected study participants in the Netherlands, 21.4% had at least one new or severely increased symptom 3-5 months after infection compared with before infection. When that group of 21.4% was compared with 8.7% of uninfected people in the same study, the researchers were able to calculate a prevalence 12.7% with long COVID.
“This finding shows that post–COVID-19 condition is an urgent problem with a mounting human toll,” the study authors wrote.
The research design was novel, two editorialists said in an accompanying commentary.
Christopher Brightling, PhD, and Rachael Evans, MBChB, PhD, of the Institute for Lung Health, University of Leicester (England), noted: “This is a major advance on prior long COVID prevalence estimates as it includes a matched uninfected group and accounts for symptoms before COVID-19 infection.”
Symptoms that persist
The Lancet study found that 3-5 months after COVID (compared with before COVID) and compared with the non-COVID comparison group, the symptoms that persist were chest pain, breathing difficulties, pain when breathing, muscle pain, loss of taste and/or smell, tingling extremities, lump in throat, feeling hot and cold alternately, heavy limbs, and tiredness.
The authors noted that symptoms such as brain fog were found to be relevant to long COVID after the data collection period for this paper and were not included in this research.
Researcher Aranka V. Ballering, MSc, a PhD candidate, said in an interview that fever is clearly present during the acute phase of the disease, peaking on the day of the COVID-19 diagnosis and then wearing off.
Loss of taste and smell, however, rapidly increases in severity when COVID-19 is diagnosed, but also persists and is still present 3-5 months after COVID.
Ms. Ballering, with the department of psychiatry at the University of Groningen (the Netherlands), said she was surprised by the sex difference made evident in their research: “Women showed more severe persistent symptoms than men.”
Closer to a clearer definition
The authors said their findings also pinpoint symptoms that bring us closer to a better definition of long COVID, which has many different definitions globally.
“These symptoms have the highest discriminative ability to distinguish between post–COVID-19 condition and non–COVID-19–related symptoms,” they wrote.
Researchers collected data by asking participants in the northern Netherlands, who were part of the population-based Lifelines COVID-19 study, to regularly complete digital questionnaires on 23 symptoms commonly associated with long COVID. The questionnaire was sent out 24 times to the same people between March 2020 and August 2021. At that time, people had the Alpha or earlier variants.
Participants were considered COVID-19 positive if they had either a positive test or a doctor’s diagnosis of COVID-19.
Of 76,422 study participants, the 5.5% (4,231) who had COVID were matched to 8,462 controls. Researchers accounted for sex, age, and time of completing questionnaires.
Effect of hospitalization, vaccination unclear
Ms. Ballering said it’s unclear from this data whether vaccination or whether a person was hospitalized would change the prevalence of persistent symptoms.
Because of the period when the data were collected, “the vast majority of our study population was not fully vaccinated,” she said.
However, she pointed to recent research that shows that immunization against COVID is only partially effective against persistent somatic symptoms after COVID.
Also, only 5% of men and 2.5% of women in the study were hospitalized as a result of COVID-19, so the findings can’t easily be generalized to hospitalized patients.
The Lifelines study was an add-on study to the multidisciplinary, prospective, population-based, observational Dutch Lifelines cohort study examining 167,729 people in the Netherlands. Almost all were White, a limitation of the study, and 58% were female. Average age was 54.
The editorialists also noted additional limitations of the study were that this research “did not fully consider the impact on mental health” and was conducted in one region in the Netherlands.
Janko Nikolich-Žugich, MD, PhD, director of the Aegis Consortium for Pandemic-Free Future and head of the immunobiology department at University of Arizona, Tucson, said in an interview that he agreed with the editorialists that a primary benefit of this study is that it corrected for symptoms people had before COVID, something other studies have not been able to do.
However, he cautioned about generalizing the results for the United States and other countries because of the lack of diversity in the study population with regard to education level, socioeconomic factors, and race. He pointed out that access issues are also different in the Netherlands, which has universal health care.
He said brain fog as a symptom of long COVID is of high interest and will be important to include in future studies that are able to extend the study period.
The work was funded by ZonMw; the Dutch Ministry of Health, Welfare, and Sport; Dutch Ministry of Economic Affairs; University Medical Center Groningen, University of Groningen; and the provinces of Drenthe, Friesland, and Groningen. The study authors and Dr. Nikolich-Žugich have reported no relevant financial relationships. Dr. Brightling has received consultancy fees and/or grants paid to his institution from GlaxoSmithKline, AstraZeneca, Boehringer Ingelheim, Novartis, Chiesi, Genentech, Roche, Sanofi, Regeneron, Mologic, and 4DPharma for asthma and chronic obstructive pulmonary disease research. Dr. Evans has received consultancy fees from AstraZeneca on the topic of long COVID and from GlaxoSmithKline on digital health, and speaker’s fees from Boehringer Ingelheim on long COVID.
A version of this article first appeared on Medscape.com.
FROM THE LANCET
Long COVID doubles risk of some serious outcomes in children, teens
Researchers from the Centers for Disease Control and Prevention report that children and teens who have had COVID-19 face roughly double the risk of several serious outcomes. Heart inflammation, a blood clot in the lung, and a blood clot in the lower leg, thigh, or pelvis were the most common of these outcomes in the new study. Even though the risk was higher for these and some other serious events, the overall numbers were small.
“Many of these conditions were rare or uncommon among children in this analysis, but even a small increase in these conditions is notable,” a CDC news release stated.
The investigators said their findings stress the importance of COVID-19 vaccination in Americans under the age of 18.
The study was published online in the CDC’s Morbidity and Mortality Weekly Report.
Less is known about long COVID in children
Lyudmyla Kompaniyets, PhD, and colleagues noted that most research on long COVID to date has been done in adults, so little information is available about the risks to Americans ages 17 and younger.
To learn more, they compared post–COVID-19 symptoms and conditions between 781,419 children and teenagers with confirmed COVID-19 to another 2,344,257 without COVID-19. They looked at medical claims and laboratory data for these children and teenagers from March 1, 2020, through Jan. 31, 2022, to see who got any of 15 specific outcomes linked to long COVID-19.
Long COVID was defined as a condition in which symptoms last for, or begin, at least 4 weeks after a COVID-19 diagnosis.
Compared to children with no history of a COVID-19 diagnosis, the long COVID-19 group was 101% more likely to have an acute pulmonary embolism, 99% more likely to have myocarditis or cardiomyopathy, 87% more likely to have a venous thromboembolic event, 32% more likely to have acute and unspecified renal failure, and 23% more likely to have type 1 diabetes.
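The "% more likely" figures above are relative comparisons against the uninfected group. As an illustrative sketch (the function name is my own; the MMWR report itself gives adjusted ratios), a risk ratio maps to "% more likely" like this:

```python
# Illustrative conversion between a risk ratio and "% more likely",
# as used in the figures above. Not the study's actual computation.
def pct_more_likely(risk_ratio: float) -> int:
    """A risk ratio of 2.01 corresponds to '101% more likely'."""
    return round((risk_ratio - 1) * 100)

print(pct_more_likely(2.01))  # → 101 (acute pulmonary embolism)
print(pct_more_likely(1.23))  # → 23 (type 1 diabetes)
```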
“This report points to the fact that the risks of COVID infection itself, both in terms of the acute effects, MIS-C [multisystem inflammatory syndrome in children], as well as the long-term effects, are real, are concerning, and are potentially very serious,” said Stuart Berger, MD, chair of the American Academy of Pediatrics Section on Cardiology and Cardiac Surgery.
“The message that we should take away from this is that we should be very keen on all the methods of prevention for COVID, especially the vaccine,” said Dr. Berger, chief of cardiology in the department of pediatrics at Northwestern University in Chicago.
A ‘wake-up call’
The study findings are “sobering” and are “a reminder of the seriousness of COVID infection,” says Gregory Poland, MD, an infectious disease expert at the Mayo Clinic in Rochester, Minn.
“When you look in particular at the more serious complications from COVID in this young age group, those are life-altering complications that will have consequences and ramifications throughout their lives,” he said.
“I would take this as a serious wake-up call to parents [at a time when] the immunization rates in younger children are so pitifully low,” Dr. Poland said.
Still early days
The study is suggestive but not definitive, said Peter Katona, MD, professor of medicine and infectious diseases expert at the UCLA Fielding School of Public Health.
It’s still too early to draw conclusions about long COVID, including in children, because many questions remain, he said: Should long COVID be defined as symptoms at 1 month or 3 months after infection? How do you define brain fog?
Dr. Katona and colleagues are studying long COVID intervention among students at UCLA to answer some of these questions, including the incidence and effect of early intervention.
The study had “at least seven limitations,” the researchers noted. Among them was the use of medical claims data that noted long COVID outcomes but not how severe they were; some people in the no COVID group might have had the illness but not been diagnosed; and the researchers did not adjust for vaccination status.
Dr. Poland noted that the study was done during surges in COVID variants including Delta and Omicron. In other words, any long COVID effects linked to more recent variants such as BA.5 or BA.2.75 are unknown.
A version of this article first appeared on WebMD.com.
FROM THE MMWR
Patient CRC screening preferences don’t match what they’re being offered
Patients said they’d prefer fecal immunochemical test (FIT)–fecal DNA tests over any of the other colorectal cancer (CRC) screening modalities currently recommended by the U.S. Multi-Society Task Force, according to a study published in Clinical Gastroenterology and Hepatology.
Just over a third of American adults aged 40 and older who hadn’t yet been screened for CRC preferred the FIT–fecal DNA test every 3 years, whereas just one in seven respondents preferred a colonoscopy – considered the gold standard in colorectal cancer screening – every 10 years.
”When you talk to patients and to your friends and family members, people tend to think colonoscopy is synonymous with colon cancer screening, but we have lots of different tests,” senior author Christopher V. Almario, MD, MSHPM, of the department of medicine at the Karsh division of gastroenterology and hepatology, Cedars-Sinai Medical Center, Los Angeles, said in an interview.
“Most people in general tend to prefer noninvasive stool tests, and when we try to predict who would prefer what, we actually couldn’t, so this is a very personal decision,” Dr. Almario said. “It’s important for clinicians to offer multiple choices to their patients, not to mention just colonoscopy. We have data from observing clinician-patient interactions showing that, a lot of times, colonoscopy is the only test that’s offered, despite there being multiple options.”
At the very least, Dr. Almario said, providers should offer patients a colonoscopy along with a noninvasive test, particularly a stool test, and discuss the two options, getting the patient’s input in terms of what they prefer. “The best test is the test that actually gets done,” he said.
Offering patients options
Reid M. Ness, MD, MPH, an associate professor of medicine in the division of gastroenterology, hepatology and nutrition at Vanderbilt University Medical Center in Nashville, was not involved with the study but wasn’t surprised at the findings since “most people wisely prefer to avoid invasive procedures,” he said in an interview. He agreed that many patients aren’t necessarily informed of all their options for screening.
“Many people who are now being offered colonoscopy as their only screening option may prefer a noninvasive option, such as FIT or multitarget stool DNA testing,” Dr. Ness said. “Also, people now refusing colonoscopy for colorectal cancer screening may instead accept FIT or multitarget stool DNA testing. It is difficult to know how many people now refusing colorectal cancer screening may have accepted screening if it had been offered differently.”
That’s precisely what Dr. Almario and his colleagues wanted to find out. They surveyed 1,000 people aged 40 and older who were at average risk for colorectal cancer to find out their preferences for different screening modalities and what features of different screening types they most valued. The researchers asked about the following screening tests recommended by the U.S. Multi-Society Task Force:
- FIT every year.
- FIT–fecal DNA every 3 years.
- Colon video capsule every 5 years.
- CT colonography every 5 years.
- Colonoscopy every 10 years.
The respondents who completed the online survey were recruited from a sample of more than 20 million people across the United States who have agreed to receive survey invitations. Respondents were excluded if they had a first-degree relative with colorectal cancer, had already undergone colorectal cancer screening or had been diagnosed with colon polyps, Crohn’s disease, or ulcerative colitis.
The respondents were split into those aged 40-49 (61% of the sample) who had not yet discussed colorectal cancer screening with their providers and those aged 50 and older, who might have already discussed it and declined. Eighty percent of the respondents were White, 6% were Black, 6% were Hispanic, 4% were Asian, and 3% reported another race/ethnicity. Just over half (52%) had at least two comorbidities. A quarter (25%) reported one comorbidity, and 22% reported none.
In thinking about the decision to get screened, respondents ranked the test type as the most important consideration, followed by the reduction in their chance of developing colorectal cancer and then frequency of the test. Lower priority on the list of considerations were their chances of a complication, bowel prep before the test, and required diet changes before the test.
The test preferred by the highest proportion of respondents was the FIT–fecal DNA test every 3 years, preferred by 35% of respondents, followed by the colon capsule video test every 5 years (28%). About one in seven respondents (14%) preferred a colonoscopy every 10 years, followed by the annual FIT (12%) and CT colonography every 5 years (11%). When limited only to the two tier 1–option tests – the annual FIT or a colonoscopy every 10 years – a substantial majority of the younger (69%) and older (77%) groups preferred the annual FIT.
”This finding is discordant with current CRC screening utilization in the United States where colonoscopy is the most commonly performed test, and this may partially explain our suboptimal screening rates,” the authors wrote. “Our findings suggest that screening programs should strongly consider a sequential-based strategy where FIT is offered first, and if declined then colonoscopy.”
Underlying factors
Dr. Ness said that many primary care providers might prefer to offer colonoscopies instead of annual FIT tests because it’s easier to track a test given every 10 years instead of every year or every 3 years.
“Providers across most of the U.S. are incentivized to recommend colonoscopy as the primary screening modality because the burden of follow-up on them is less,” Dr. Ness said. “They are able to justify this choice given colonoscopy remains the most accurate screening modality.”
Dr. Ness pointed to the programmatic screening program at Kaiser Permanente of Northern California health care system as a model for a program that utilizes FIT tests more often.
“The only way to accomplish an efficient and equitable colorectal cancer screening program is within the context of a national health service or plan,” Dr. Ness added. “Otherwise, the uninsured and underinsured will remain excluded from the benefits of colorectal cancer screening.”
Preferences did not differ a great deal between the age groups, with 35% of the younger group and 37% of the older group preferring the FIT–fecal DNA test every 3 years. More people in the 50-plus group preferred an annual FIT (19% vs. 12% of the younger group), while more in the younger group preferred the colon capsule video every 5 years (28% vs. 23%) or CT colonography every 5 years (11% vs. 8%); these differences were statistically significant (P = .019).
In fact, “sociodemographic, clinical characteristics, and colorectal cancer screening knowledge, attitudes, and beliefs were not predictive of selecting FIT or colonoscopy,” the authors found. ”This demonstrates the individualized nature of decision making on colorectal cancer screening tests. Moreover, as most individuals preferred FIT, it again emphasizes the importance of sequential or choice-based strategies for colorectal cancer screening.”
However, one of the study’s notable limitations was its high proportion of White patients relative to other racial/ethnic groups, so additional research may illuminate whether different sociodemographic groups do have slight preferences for one test over another, Dr. Almario said. The advantage to colonoscopies, he noted, is that they only occur every 10 years and if polyps are discovered, they can be taken care of right away.
”You don’t have to think about it for a decade, which is certainly a pro for the colonoscopy,” Dr. Almario said. “The FIT test is obviously less invasive, but you have to do it every year for it to be an effective screening test.” He noted that some data have shown a drop-off in compliance over multiple years. “We certainly need more systems in place to remind patients and providers to do it annually so that we can see the ultimate screening benefit from doing that test specifically.”
“The most important point from the clinical perspective is, when we’re talking to patients about colon cancer screening, make sure to give them a choice,” Dr. Almario said. “We just can’t look at someone’s chart, their clinical characteristics or demographics, and predict what tests they would prefer. We need to ask them. We need to present them with the options, go over the pros and cons of colonoscopy, the pros and cons of the stool test, and ask the patient what they would prefer to do.”
The research was funded by the National Cancer Institute and the National Institutes of Health. One author served on an advisory board with Exact Sciences. The other authors and Dr. Ness had no disclosures.
Patients said they’d prefer fecal immunochemical test (FIT)–fecal DNA tests over any of the other colorectal cancer (CRC) screening modalities currently recommended by the U.S. Multi-Society Task Force, according to a study published in Clinical Gastroenterology and Hepatology.
Just over a third of American adults aged 40 and older who hadn’t yet been screened for CRC preferred the FIT–fecal DNA test every 3 years, whereas just one in seven respondents preferred a colonoscopy – considered the gold standard in colorectal cancer screening – every 10 years.
“When you talk to patients and to your friends and family members, people tend to think colonoscopy is synonymous with colon cancer screening, but we have lots of different tests,” senior author Christopher V. Almario, MD, MSHPM, of the department of medicine at the Karsh division of gastroenterology and hepatology, Cedars-Sinai Medical Center, Los Angeles, said in an interview.
“Most people in general tend to prefer noninvasive stool tests, and when we try to predict who would prefer what, we actually couldn’t, so this is a very personal decision,” Dr. Almario said. “It’s important for clinicians to offer multiple choices to their patients, not to mention just colonoscopy. We have data from observing clinician-patient interactions showing that, a lot of times, colonoscopy is the only test that’s offered, despite there being multiple options.”
At the very least, Dr. Almario said, providers should offer patients a colonoscopy along with a noninvasive test, particularly a stool test, and discuss the two options, getting the patient’s input in terms of what they prefer. “The best test is the test that actually gets done,” he said.
Offering patients options
Reid M. Ness, MD, MPH, an associate professor of medicine in the division of gastroenterology, hepatology and nutrition at Vanderbilt University Medical Center in Nashville, was not involved with the study but wasn’t surprised at the findings since “most people wisely prefer to avoid invasive procedures,” he said in an interview. He agreed that many patients aren’t necessarily informed of all their options for screening.
“Many people who are now being offered colonoscopy as their only screening option may prefer a noninvasive option, such as FIT or multitarget stool DNA testing,” Dr. Ness said. “Also, people now refusing colonoscopy for colorectal cancer screening may instead accept FIT or multitarget stool DNA testing. It is difficult to know how many people now refusing colorectal cancer screening may have accepted screening if it had been offered differently.”
That’s precisely what Dr. Almario and his colleagues wanted to find out. They surveyed 1,000 people aged 40 and older who were at average risk for colorectal cancer to find out their preferences for different screening modalities and what features of different screening types they most valued. The researchers asked about the following screening tests recommended by the U.S. Multi-Society Task Force:
- FIT every year.
- FIT–fecal DNA every 3 years.
- Colon video capsule every 5 years.
- CT colonography every 5 years.
- Colonoscopy every 10 years.
The respondents who completed the online survey were recruited from a sample of more than 20 million people across the United States who have agreed to receive survey invitations. Respondents were excluded if they had a first-degree relative with colorectal cancer, had already undergone colorectal cancer screening or had been diagnosed with colon polyps, Crohn’s disease, or ulcerative colitis.
The respondents were split into those aged 40-49 (61% of the sample) who had not yet discussed colorectal cancer screening with their providers and those aged 50 and older, who might have already discussed it and declined. Eighty percent of the respondents were White, 6% were Black, 6% were Hispanic, 4% were Asian, and 3% reported another race/ethnicity. Just over half (52%) had at least two comorbidities. A quarter (25%) reported one comorbidity, and 22% reported none.
In thinking about the decision to get screened, respondents ranked the test type as the most important consideration, followed by the reduction in their chance of developing colorectal cancer and then frequency of the test. Lower priority on the list of considerations were their chances of a complication, bowel prep before the test, and required diet changes before the test.
The test preferred by the highest proportion of respondents was the FIT–fecal DNA test every 3 years, preferred by 35% of respondents, followed by the colon capsule video test every 5 years (28%). About one in seven respondents (14%) preferred a colonoscopy every 10 years, followed by the annual FIT (12%) and CT colonography every 5 years (11%). When limited only to the two tier 1–option tests – the annual FIT or a colonoscopy every 10 years – a substantial majority of the younger (69%) and older (77%) groups preferred the annual FIT.
“This finding is discordant with current CRC screening utilization in the United States where colonoscopy is the most commonly performed test, and this may partially explain our suboptimal screening rates,” the authors wrote. “Our findings suggest that screening programs should strongly consider a sequential-based strategy where FIT is offered first, and if declined then colonoscopy.”
Underlying factors
Dr. Ness said that many primary care providers might prefer to offer colonoscopies instead of annual FIT tests because it’s easier to track a test given every 10 years instead of every year or every 3 years.
“Providers across most of the U.S. are incentivized to recommend colonoscopy as the primary screening modality because the burden of follow-up on them is less,” Dr. Ness said. “They are able to justify this choice given colonoscopy remains the most accurate screening modality.”
Dr. Ness pointed to the programmatic screening effort at the Kaiser Permanente Northern California health care system as a model for a program that makes greater use of FIT.
“The only way to accomplish an efficient and equitable colorectal cancer screening program is within the context of a national health service or plan,” Dr. Ness added. “Otherwise, the uninsured and underinsured will remain excluded from the benefits of colorectal cancer screening.”
Preferences did not differ a great deal between the age groups, with 35% of the younger group and 37% of the older group preferring the FIT–fecal DNA test every 3 years. Slightly more people in the 50+ age group preferred an annual FIT (19% vs. 12%), whereas the younger group was more likely to choose the colon capsule video every 5 years (28% vs. 23%) or CT colonography every 5 years (11% vs. 8%), but the differences were statistically significant (P = .019).
In fact, “sociodemographic, clinical characteristics, and colorectal cancer screening knowledge, attitudes, and beliefs were not predictive of selecting FIT or colonoscopy,” the authors found. “This demonstrates the individualized nature of decision making on colorectal cancer screening tests. Moreover, as most individuals preferred FIT, it again emphasizes the importance of sequential or choice-based strategies for colorectal cancer screening.”
However, one of the study’s notable limitations was its high proportion of White patients relative to other racial/ethnic groups, so additional research may illuminate whether different sociodemographic groups do have slight preferences for one test over another, Dr. Almario said. The advantage to colonoscopies, he noted, is that they only occur every 10 years and if polyps are discovered, they can be taken care of right away.
“You don’t have to think about it for a decade, which is certainly a pro for the colonoscopy,” Dr. Almario said. “The FIT test is obviously less invasive, but you have to do it every year for it to be an effective screening test.” He noted that some data have shown a drop-off in compliance over multiple years. “We certainly need more systems in place to remind patients and providers to do it annually so that we can see the ultimate screening benefit from doing that test specifically.”
“The most important point from the clinical perspective is, when we’re talking to patients about colon cancer screening, make sure to give them a choice,” Dr. Almario said. “We just can’t look at someone’s chart, their clinical characteristics or demographics, and predict what tests they would prefer. We need to ask them. We need to present them with the options, go over the pros and cons of colonoscopy, the pros and cons of the stool test, and ask the patient what they would prefer to do.”
The research was funded by the National Cancer Institute and the National Institutes of Health. One author served on an advisory board with Exact Sciences. The other authors and Dr. Ness had no disclosures.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY