Understanding and Promoting Compassion in Medicine
In most Western countries, professional standards dictate that physicians should practice medicine with compassion. Patients also expect compassionate care from physicians because it is associated with greater patient satisfaction, better doctor-patient relationships, and better psychological well-being among patients.
The term “compassion” derives from the Latin roots “com,” meaning “together with,” and “pati,” meaning “to endure or suffer.” Compassion must be distinguished from empathy, a term generally used for the cognitive or emotional processes by which one takes the perspective of the other (in this case, the patient). Compassion implies empathy and adds the desire to help or alleviate the suffering of others. In the medical context, compassion is likely a specific instance of a more complex adaptive system, evolved not only among humans, that motivates the recognition of suffering in others and the impulse to assist.
Compassion Fatigue
Physicians’ compassion is expected by patients and the profession. It is fundamental for effective clinical practice. Although compassion is central to medical practice, most research related to the topic has focused on “compassion fatigue,” which is understood as a specific type of professional burnout, as if physicians had a limited reserve of compassion that dwindles or becomes exhausted with use or overuse. This is one aspect of a much more complex problem, in which compassion represents the endpoint of a dynamic process that encompasses the influences of the physician, the patient, the clinic, and the institution.
Compassion Capacity: Conditioning Factors
Chronic exposure of physicians to conflicting work demands may be associated with the depletion of their psychological resources and, consequently, emotional and cognitive fatigue that can contribute to poorer work outcomes, including the ability to express compassion.
Rates of professional burnout in medicine are increasing. The driving factors are largely rooted in organizations and healthcare systems and include excessive workloads, inefficient work processes, administrative burdens, and physicians’ lack of input or control over issues concerning their work life. The outcome is often early retirement, an increasingly widespread phenomenon and a critical issue not only for the Italian National Health Service but also for other healthcare systems worldwide.
Organizational and Personal Values
Until recently, there was no clear empirical evidence supporting the hypothesis that working in healthcare environments experienced as discrepant with one’s own values has negative effects on key professional outcomes. However, a study published in the Journal of Internal Medicine highlighted the overall negative effect of misalignment between system values and physicians’ personal values, including an impaired ability to provide compassionate care, reduced job satisfaction, burnout, absenteeism, and consideration of early retirement. Results from 1000 surveyed professionals indicate that physicians’ subjective competence in providing compassionate care may remain high even as their ability to express it is compromised. From their analysis, the authors hypothesize that when physicians work in environments with discrepant values, occupational contingencies may repeatedly require them to set aside their personal values, leading them to refrain from using available skills in order to keep their performance in line with organizational requirements.
These results and hypotheses are not consistent with the notion of compassion fatigue as a reflection of the cost of caring under exposure to repeated suffering. Previous evidence shows that expressing compassion in healthcare facilitates greater understanding, suggesting that providing compassion does not impoverish physicians but rather supports both the effectiveness of their interventions and their own satisfaction.
In summary, this study suggests that what prevents compassion is the inability to provide it when hindered by factors related to the situation in which the physician operates. Improving compassion does not simply depend on motivating individual professionals to be more compassionate or on promoting fundamental skills, but probably on the creation of organizational and clinical conditions in which physician compassion can thrive.
This story was translated from Univadis Italy, which is part of the Medscape professional network, using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Losing More Than Fat
Whether you have totally bought into the “obesity is a disease” paradigm or are still in denial, you must admit that the development of a suite of effective weight loss medications has created a tsunami of interest and economic activity in this country on a scale not seen since the Beanie Baby craze of the mid-1990s. But obesity management is serious business. While most of those soft cuddly toys are gathering dust in shoeboxes across this country, weight loss medications are likely to be the vanguard of a rapidly evolving revolution in healthcare management that will be with us for the foreseeable future.
Most thoughtful folks who purchased Beanie Babies in 1994 had no illusions and knew that in a few short years this bubble of soft cuddly toys was going to burst. However, do those of us on the front line of medical care know what the future holds for the patients who are being prescribed or are scavenging those too-good-to-be-true medications?
My guess is that in the long run we will need a combination of some serious tinkering by the pharmaceutical industry and a trek up some steep learning curves before we eventually arrive at safe and effective chemical management for obese patients. I recently read an article by an obesity management specialist at Harvard Medical School who voiced her concern that we are missing an opportunity to turn this explosion in the popularity of GLP-1 drugs into an important learning experience.
In an opinion piece in JAMA Internal Medicine, Dr. Fatima Cody Stanford and her coauthors argue that we, or more precisely the US Food and Drug Administration (FDA), are overfocused on weight loss in determining the efficacy of anti-obesity medications. Dr. Stanford and colleagues point out that when a patient loses weight, it isn’t just fat; weight loss is a complex process that may include loss of muscle and bone mineral as well. She has consulted for at least one obesity-drug manufacturer and says that these companies have the resources to produce data on body composition that could help clinicians create management plans addressing patients’ overall health. However, the FDA has not demanded this broader and deeper assessment of general health when reviewing drug trials.
I don’t think we can blame the patients for not asking whether they will be healthier while taking these medications. They have already spent a lifetime, even if it is just a decade, suffering as the “fat one.” A new outfit and a look in the mirror can’t help but make them feel better ... in the short term anyway. We as physicians must shoulder some of the blame for focusing on weight. Our spoken or unspoken message has been “Lose weight and you will be healthier.” We may make our message sound more professional by tossing around terms like “BMI,” but as Dr. Stanford points out, “we have known BMI is a flawed metric for a long time.”
There is the notion that obese people have had to build more muscle to help them carry around the extra weight, so that we should expect them to lose that extra muscle along with the fat. However, in older adults there is an entity called sarcopenic obesity, in which the patient doesn’t have that extra muscle to lose.
In a brief Internet research venture, I could find little on the subject of muscle loss and GLP-1s other than “it can happen,” and nothing on the effect in adolescents. And that is one of Dr. Stanford’s points: We just don’t know. She said that assessing body composition can be costly and is not something the clinician can readily do. As far as muscle mass is concerned, however, we need to be alert to the potential for loss. Simple assessments of strength can help us tailor our management to the specific patient’s needs.
The bottom line is this ... now that we have effective medications for “weight loss,” we need to redefine the relationship between weight and health. “We” means us as clinicians. It means the folks at FDA. And, if we can improve our messaging, it will osmose to the rest of the population. Just because you’ve dropped two dress sizes doesn’t mean you’re healthy.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littmann stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Human Brains Are Getting Bigger: Good News for Dementia Risk?
A secular trends analysis using brain imaging data from the long-running Framingham Heart Study revealed an increase in intracranial volume (ICV), cortical gray matter, white matter, and hippocampal volumes, as well as cortical surface area in people born in the 1970s versus those born in the 1930s.
“We hypothesize that the increased size of the brain will lead to increased ‘reserve’ against the diseases of aging, consequently reducing overall risk of dementia,” said Charles DeCarli, MD, director of the Alzheimer’s Disease Research Center and Imaging of Dementia and Aging Laboratory, Department of Neurology and Center for Neuroscience, University of California at Davis.
The study was published online in JAMA Neurology.
Dementia Protection?
An earlier report from the Framingham Heart Study suggested that dementia incidence is declining.
“This difference occurred among persons with at least a high school education and was not affected by differences in vascular risk. Our work was stimulated by this finding and the possibility that differences in brain size might be occurring over the three generations of the Framingham Heart Study which might explain an increased resilience to dementia,” said Dr. DeCarli.
The cross-sectional study used data from 3226 Framingham participants (53% women) born in the decades 1930–1970. None had dementia or a history of stroke. At a mean age of 57.7 years, they underwent brain MRI.
Compared with the 1930s birth decade, the 1970s birth decade had a 6.6% greater ICV (1321 mL vs 1234 mL), 7.7% greater white matter volume (476.3 mL vs 441.9 mL), 5.7% greater hippocampal volume (6.69 mL vs 6.51 mL), and 14.9% greater cortical surface area (2222 cm2 vs 1933 cm2).
Cortical thickness was 21% lower over the same period, coinciding with larger intracranial volume, cerebral white matter volume, and cortical surface area.
“We were surprised to find that the brain is getting larger, but the cortex is thinning very slightly. The apparent thinning of the cortex is related to the increased need for expansion of the cortical ribbon. This is based on hypotheses related to the effects of evolution and cortical development designed to make neuronal integration most efficient,” said Dr. DeCarli.
A repeat analysis in a subgroup of 1145 individuals of similar age born in the 1940s (mean age, 60 years) and 1950s (mean age, 59 years) yielded similar findings.
“These findings likely reflect both secular improvements in early life environmental influences through health, social-cultural, and educational factors, as well as secular improvements in modifiable dementia risk factors leading to better brain health and reserve,” the authors wrote.
While the effects observed are “likely to be small at the level of the individual, they are likely to be substantial at the population level, adding to growing literature that suggests optimized brain development and ideal health through modification of risk factors could substantially modify the effect of common neurodegenerative diseases such as stroke and Alzheimer’s disease on dementia incidence,” they added.
Limitations included the predominantly non-Hispanic White, healthy, and well-educated Framingham cohort, which is not representative of the broader US population. The cross-sectional nature of the study also limited causal inference.
Exciting Work
“If these results are confirmed by others and the observed differences by decade are as large as those reported, it has important implications for aging and dementia studies,” Prashanthi Vemuri, PhD, of the Mayo Clinic, Rochester, Minnesota, wrote in an accompanying editorial.
“First, studies that use brain charts for the human life span to understand the mechanisms of aging, by stitching together data from individuals across the decades, are significantly overestimating the degree of brain health decline using volumes across the life span because the baseline brain health in individuals who are in their older decades is likely lower to begin with,” Dr. Vemuri noted.
“Second, cortical thickness measurements, often used in dementia studies as a cross-sectional marker for neurodegeneration, showed greatest decline due to secular trends and are not scaled for ICV. Therefore, these should be traded in favor of gray matter volumes after consideration of ICV to estimate the true degree of neurodegeneration,” Dr. Vemuri added.
The data also suggest that longitudinal imaging study designs should be preferred when testing hypotheses on brain health, Dr. Vemuri wrote.
Although this work is “exciting and will bring attention to secular trends in brain health, much work is yet to be done to validate and replicate these findings and, more importantly, understand the mechanistic basis of these trends,” she added.
“Do these secular trends in improvement of brain health underlie the decrease in dementia risk? The jury may be still out, but the authors are commended for investigating new avenues,” Dr. Vemuri concluded.
Support for this research was provided by the National Institute on Aging, the National Institute of Neurological Disorders and Stroke, and the National Institutes of Health. Dr. DeCarli reported serving as a consultant to Novartis on a safety study of heart failure during the conduct of the study and receiving consultant fees from Eisai and Novo Nordisk outside the submitted work. Dr. Vemuri had no disclosures.
A version of this article appeared on Medscape.com.
Most Disadvantaged Least Likely to Receive Thrombolysis
Patients with ischemic stroke who are the most socially disadvantaged are the least likely to receive thrombolytic therapy, early research shows.
“The findings should serve as an eye-opener that social determinants of health seem to be playing a role in who receives thrombolytic therapy,” said study investigator Chanaka Kahathuduwa, MD, PhD, resident physician, Department of Neurology, School of Medicine, Texas Tech University Health Sciences Center, Lubbock.
The findings were released ahead of the study’s scheduled presentation at the annual meeting of the American Academy of Neurology.
Contributor to Poor Outcomes
Social determinants of health are important contributors to poor stroke-related outcomes, the investigators noted. They pointed out that previous research has yielded conflicting results as to the cause.
Whereas some studies suggest poor social determinants of health drive increased stroke incidence, others raise the question of whether there are disparities in acute stroke care.
To investigate, the researchers used a publicly available database and diagnostic and procedure codes to identify patients presenting at emergency departments in Texas from 2016 to 2019 with ischemic stroke who did and did not receive thrombolytic therapy.
“We focused on Texas, which has a very large area but few places where people have easy access to health care, which is a problem,” said study co-investigator Chathurika Dhanasekara, MD, PhD, research assistant professor in the Department of Surgery, School of Medicine, Texas Tech University Health Sciences Center.
The study included 63,983 stroke patients, of whom 51.6% were female, 66.6% were White, and 17.7% were Black. Of these, 7198 (11.2%) received thrombolytic therapy; such therapies include the tissue plasminogen activators (tPAs) alteplase and tenecteplase.
Researchers collected information on social determinants of health such as age, race, gender, insurance type, and residence based on zip codes. They computed risk ratios (RRs) of administering thrombolysis on the basis of these variables.
Results showed that Black patients were less likely than their White counterparts to receive thrombolysis (RR, 0.90; 95% CI, 0.85-0.96). In addition, patients older than 65 years were less likely than those aged 18-45 years to receive thrombolysis (RR, 0.47; 95% CI, 0.44-0.51), and rural residents were less likely than urban dwellers to receive the intervention (RR, 0.60; 95% CI, 0.55-0.65).
It makes some sense, the researchers said, that rural stroke patients would be less likely to get thrombolysis because there’s a limited time window — within 4.5 hours — during which this therapy can be given, and such patients may live a long distance from a hospital.
Two other groups less likely to receive thrombolysis were Hispanic persons versus non-Hispanic persons (RR, 0.93; 95% CI, 0.87-0.98) and Medicare/Medicaid/Veterans Administration patients (RR, 0.77; 95% CI, 0.73-0.81) or uninsured patients (RR, 0.90; 95% CI, 0.87-0.94) vs those with private insurance.
Interestingly, male patients were less likely than female patients to receive thrombolysis (RR, 0.95; 95% CI, 0.90-0.99).
Surprising Findings
With the exception of the discrepancy in thrombolysis rates between rural versus urban dwellers, the study’s findings were surprising, said Dr. Kahathuduwa.
Researchers divided participants into quartiles, from least to most disadvantaged, based on the Social Vulnerability Index (SVI), created by the Centers for Disease Control and Prevention to determine social vulnerability or factors that can negatively affect a community’s health.
Among the 7930 individuals in the least disadvantaged group, 1037 received thrombolysis. In comparison, among the 7966 persons in the most disadvantaged group, 964 received thrombolysis.
After adjusting for age, sex, and education, investigators found that patients in the first (least disadvantaged) SVI quartile were more likely to receive thrombolysis than those in the second and third quartiles (RR, 1.13; 95% CI, 1.04-1.22).
The researchers also examined the impact of comorbidities using the Charlson Comorbidity Index. Patients with diabetes, hypertension, and high cholesterol in addition to signs of stroke would rouse a higher degree of suspicion and be more likely to be treated with tPA or tenecteplase, said Dr. Kahathuduwa.
“But even when we controlled for those comorbidities, the relationships we identified between health disparities and the likelihood of receiving thrombolysis remained the same,” said Dr. Kahathuduwa.
It’s not clear from this study what factors contribute to the disparities in stroke treatment. “All we know is these relationships exist,” said Dr. Kahathuduwa. “We should use this as a foundation to understand what’s really going on at the grassroots level.”
However, he added, it’s possible that accessibility plays a role. He noted that Lubbock has the only Level 1 stroke center in west Texas; most stroke centers in the state are concentrated in cities in east and central Texas.
The investigators are embarking on further research to assess the impact of determinants of health on receipt of endovascular therapy and the role of stroke severity.
“In an ideal world, all patients who need thrombolytic therapy would get thrombolytic therapy within the recommended time window because the benefits are very clear,” said Dr. Kahathuduwa.
The findings may not be generalizable because they come from a single database. “Our findings need to be validated in another independent dataset before we can confidently determine what’s going on,” said Dr. Kahathuduwa.
A limitation of the study was that it is unknown how many of the participants were seen at the hospital within the recommended time frame and would thus be eligible to receive the treatment.
Commenting on the research, Martinson Arnan, MD, a vascular neurologist at Bronson Neuroscience Center, Kalamazoo, Michigan, said the study’s “exploratory finding” is important and “illuminates the potential impact of social determinants of health on disparities in acute stroke treatment.”
Neurologists consistently emphasize the principle that “time is brain” — that timely restoration of blood flow is crucial for minimizing morbidity associated with ischemic stroke. This study offers a potential opportunity to investigate how social determinants of health may affect stroke care, said Dr. Arnan.
However, he added, further research is needed “to understand whether the differences in outcomes observed here are influenced by levels of health education, concordance between patients and their treating providers, or other issues related to access barriers.”
The investigators and Dr. Arnan report no relevant conflicts of interest.
A version of this article appeared on Medscape.com.
, early research shows.
“The findings should serve as an eye-opener that social determinants of health seem to be playing a role in who receives thrombolytic therapy, said study investigator Chanaka Kahathuduwa, MD, PhD, resident physician, Department of Neurology, School of Medicine, Texas Tech University Health Sciences Center, Lubbock.
The findings were released ahead of the study’s scheduled presentation at the annual meeting of the American Academy of Neurology.
Contributor to Poor Outcomes
Social determinants of health are important contributors to poor stroke-related outcomes, the investigators noted. They pointed out that previous research has yielded conflicting results as to the cause.
Whereas some studies suggest poor social determinants of health drive increased stroke incidence, others raise the question of whether there are disparities in acute stroke care.
To investigate, the researchers used a publicly available database and diagnostic and procedure codes to identify patients presenting at emergency departments in Texas from 2016 to 2019 with ischemic stroke who did and did not receive thrombolytic therapy.
“We focused on Texas, which has a very large area but few places where people have easy access to health care, which is a problem,” said study co-investigator Chathurika Dhanasekara, MD, PhD, research assistant professor in the Department of Surgery, School of Medicine, Texas Tech University Health Sciences Center.
The study included 63,983 stroke patients, of whom 51.6% were female, 66.6% were White, and 17.7% were Black. Of these, 7198 (11.2%) received thrombolytic therapy; such therapies include the tissue plasminogen activators (tPAs) alteplase and tenecteplace.
Researchers collected information on social determinants of health such as age, race, gender, insurance type, and residence based on zip codes. They computed risk ratios (RRs) of administering thrombolysis on the basis of these variables.
Results showed that Black patients were less likely than their White counterparts to receive thrombolysis (RR, 0.90; 95% CI, 0.85-0.96). In addition, patients older than 65 years were less likely those aged 18-45 years to receive thrombolysis (RR, 0.47; 95% CI, 0.44-0.51), and rural residents were less likely than urban dwellers to receive the intervention (RR, 0.60; 95% CI, 0.55-0.65).
Social determinants of health appear to play a role in which patients receive thrombolysis for acute ischemic stroke, early research shows.
“The findings should serve as an eye-opener that social determinants of health seem to be playing a role in who receives thrombolytic therapy,” said study investigator Chanaka Kahathuduwa, MD, PhD, resident physician, Department of Neurology, School of Medicine, Texas Tech University Health Sciences Center, Lubbock.
The findings were released ahead of the study’s scheduled presentation at the annual meeting of the American Academy of Neurology.
Contributor to Poor Outcomes
Social determinants of health are important contributors to poor stroke-related outcomes, the investigators noted. They pointed out that previous research has yielded conflicting results as to the cause.
Whereas some studies suggest poor social determinants of health drive increased stroke incidence, others raise the question of whether there are disparities in acute stroke care.
To investigate, the researchers used a publicly available database and diagnostic and procedure codes to identify patients presenting at emergency departments in Texas from 2016 to 2019 with ischemic stroke who did and did not receive thrombolytic therapy.
“We focused on Texas, which has a very large area but few places where people have easy access to health care, which is a problem,” said study co-investigator Chathurika Dhanasekara, MD, PhD, research assistant professor in the Department of Surgery, School of Medicine, Texas Tech University Health Sciences Center.
The study included 63,983 stroke patients, of whom 51.6% were female, 66.6% were White, and 17.7% were Black. Of these, 7198 (11.2%) received thrombolytic therapy; such therapies include the tissue plasminogen activators (tPAs) alteplase and tenecteplase.
Researchers collected information on social determinants of health such as age, race, gender, insurance type, and residence based on zip codes. They computed risk ratios (RRs) of administering thrombolysis on the basis of these variables.
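The article does not include the investigators’ analysis code. As a rough illustration only, here is a minimal Python sketch of the conventional way a risk ratio and its 95% CI are computed from a 2x2 table; all counts below are hypothetical and chosen merely to land near the RR of 0.90 reported next.

```python
# Minimal risk-ratio sketch; NOT the study's code, all counts hypothetical.
import math

def risk_ratio(a, b, c, d):
    """a treated / b untreated in group 1; c treated / d untreated in group 2."""
    rr = (a / (a + b)) / (c / (c + d))
    # Standard error of log(RR), then a 95% Wald confidence interval.
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical example: 450 of 5000 in one group vs 900 of 9000 in another.
print(risk_ratio(450, 4550, 900, 8100))  # -> RR = 0.90 with its 95% CI
```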
Results showed that Black patients were less likely than their White counterparts to receive thrombolysis (RR, 0.90; 95% CI, 0.85-0.96). In addition, patients older than 65 years were less likely than those aged 18-45 years to receive thrombolysis (RR, 0.47; 95% CI, 0.44-0.51), and rural residents were less likely than urban dwellers to receive the intervention (RR, 0.60; 95% CI, 0.55-0.65).
It makes some sense, the researchers said, that rural stroke patients would be less likely to get thrombolysis because there’s a limited time window — within 4.5 hours — during which this therapy can be given, and such patients may live a long distance from a hospital.
Two other groups less likely to receive thrombolysis were Hispanic persons versus non-Hispanic persons (RR, 0.93; 95% CI, 0.87-0.98) and Medicare/Medicaid/Veterans Administration patients (RR, 0.77; 95% CI, 0.73-0.81) or uninsured patients (RR, 0.90; 95% CI, 0.87-0.94) vs those with private insurance.
Interestingly, male patients were less likely than female patients to receive thrombolysis (RR, 0.95; 95% CI, 0.90-0.99).
Surprising Findings
With the exception of the discrepancy in thrombolysis rates between rural versus urban dwellers, the study’s findings were surprising, said Dr. Kahathuduwa.
Researchers divided participants into quartiles, from least to most disadvantaged, based on the Social Vulnerability Index (SVI), a measure created by the Centers for Disease Control and Prevention to capture factors that can negatively affect a community’s health.
Among the 7930 individuals in the least disadvantaged group, 1037 received thrombolysis. In comparison, among the 7966 persons in the most disadvantaged group, 964 received thrombolysis.
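As a quick sanity check, the crude (unadjusted) rates implied by these counts can be computed directly; the snippet below uses only the numbers quoted above. The adjusted comparison reported next groups the quartiles differently, so the two figures are not directly comparable.

```python
# Crude thrombolysis rates from the counts quoted above (unadjusted).
least = 1037 / 7930  # ~13.1% in the least disadvantaged quartile
most = 964 / 7966    # ~12.1% in the most disadvantaged quartile
print(round(least / most, 2))  # crude ratio ~1.08, before any adjustment
```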
After adjusting for age, sex, and education, investigators found that patients in the first quartile based on SVI were more likely to receive thrombolysis than those in the second and third quartiles (RR, 1.13; 95% CI, 1.04-1.22).
The researchers also examined the impact of comorbidities using the Charlson Comorbidity Index. Patients with diabetes, hypertension, and high cholesterol in addition to signs of stroke would rouse a higher degree of suspicion and be more likely to be treated with tPA or tenecteplase, said Dr. Kahathuduwa.
“But even when we controlled for those comorbidities, the relationships we identified between health disparities and the likelihood of receiving thrombolysis remained the same,” said Dr. Kahathuduwa.
It’s not clear from this study what factors contribute to the disparities in stroke treatment. “All we know is these relationships exist,” said Dr. Kahathuduwa. “We should use this as a foundation to understand what’s really going on at the grassroots level.”
However, he added, it’s possible that accessibility plays a role. He noted that Lubbock has the only Level 1 stroke center in west Texas; most stroke centers in the state are concentrated in cities in east and central Texas.
The investigators are embarking on further research to assess the impact of determinants of health on receipt of endovascular therapy and the role of stroke severity.
“In an ideal world, all patients who need thrombolytic therapy would get thrombolytic therapy within the recommended time window because the benefits are very clear,” said Dr. Kahathuduwa.
The findings may not be generalizable because they come from a single database. “Our findings need to be validated in another independent dataset before we can confidently determine what’s going on,” said Dr. Kahathuduwa.
A limitation of the study was that it is unknown how many of the participants were seen at the hospital within the recommended time frame and would thus be eligible to receive the treatment.
Commenting on the research, Martinson Arnan, MD, a vascular neurologist at Bronson Neuroscience Center, Kalamazoo, Michigan, said the study’s “exploratory finding” is important and “illuminates the potential impact of social determinants of health on disparities in acute stroke treatment.”
Neurologists consistently emphasize the principle that “time is brain” — that timely restoration of blood flow is crucial for minimizing morbidity associated with ischemic stroke. This study offers a potential opportunity to investigate how social determinants of health may affect stroke care, said Dr. Arnan.
However, he added, further research is needed “to understand whether the differences in outcomes observed here are influenced by levels of health education, concordance between patients and their treating providers, or other issues related to access barriers.”
The investigators and Dr. Arnan report no relevant conflicts of interest.
A version of this article appeared on Medscape.com.
FROM AAN 2024
Infant Exposure to MS Drugs via Breastfeeding: New Data
Infants exposed to monoclonal antibody (mAb) therapies for multiple sclerosis (MS) through breastfeeding show no negative effects on health or development in early childhood, new research confirmed.
Registry data showed no differences in health or development in the first 3 years of life among infants exposed to natalizumab, ocrelizumab, rituximab, or ofatumumab, compared with unexposed infants.
“Most monoclonal antibody medications for multiple sclerosis are not currently approved for use while a mother is breastfeeding,” even though the disease can develop during a person’s reproductive years, study investigator Kerstin Hellwig, MD, with Ruhr University in Bochum, Germany, said in a news release.
“Our data show infants exposed to these medications through breastfeeding experienced no negative effects on health or development within the first 3 years of life,” Dr. Hellwig said.
The findings were released ahead of the study’s scheduled presentation at the annual meeting of the American Academy of Neurology.
Registry Data and Analysis
Using the German MS and Pregnancy Registry, researchers identified 183 infants born to mothers taking mAbs while breastfeeding — 180 with a diagnosis of MS and three with a diagnosis of neuromyelitis optica spectrum disorder (NMOSD). The infants were matched to 183 unexposed infants (control group).
Exposure to mAbs during lactation started a median of 19 days postpartum and lasted for a median of 172 days. The most commonly used mAb during lactation was natalizumab (125 women), followed by ocrelizumab (34 women), rituximab (11 women), and ofatumumab (10 women).
Among the entire infant cohort, two were first exposed to natalizumab and then ocrelizumab; one was exposed to rituximab and then ocrelizumab; three had been previously breastfed on glatiramer acetate and two on interferons.
The primary outcomes were hospitalizations, antibiotic use, developmental delay, and weight during the first 3 years of life in mAb-exposed versus unexposed infants.
In adjusted regression analyses, mAb exposure during breastfeeding was not significantly associated with annual hospitalization (rate ratio [RR], 1.23; P = .473), annual systemic antibiotic use (RR, 1.55; P = .093), developmental delay (odds ratio, 1.16; P = .716), or weight.
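The registry’s actual statistical code is not published. As an illustration under loud assumptions (entirely synthetic data, invented column names), the sketch below shows how an annualized rate ratio of this kind can be estimated with a Poisson regression, using each infant’s follow-up time as an offset.

```python
# Toy Poisson-regression sketch; synthetic data, NOT the registry analysis.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 366  # matches 183 exposed + 183 unexposed infants
df = pd.DataFrame({
    "hospitalizations": rng.poisson(0.3, n),    # hypothetical event counts
    "exposed": rng.integers(0, 2, n),           # 1 = mAb-exposed via breastfeeding
    "followup_years": rng.uniform(1.0, 3.0, n), # hypothetical follow-up time
})

# exp(coefficient) for `exposed` is the annualized hospitalization rate ratio.
fit = smf.glm(
    "hospitalizations ~ exposed",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["followup_years"]),
).fit()
print(np.exp(fit.params["exposed"]), fit.pvalues["exposed"])
```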
A limitation of the study was that only about a third of the infants were followed for the full 3 years. Therefore, Dr. Hellwig said, the results for the third year of life are less meaningful than for years 1 and 2.
‘Reassuring’ Data
Reached for comment, Edith L. Graham, MD, Department of Neurology, Multiple Sclerosis and Neuroimmunology, Northwestern University, Chicago, Illinois, noted that this is the largest group of breastfed infants exposed to mAbs used to treat MS and said the data provide “reassuring infant outcomes with no increase in hospitalization, antibiotic use, or developmental delay.”
Dr. Graham noted that recent publications have reported more on the use of anti-CD20 mAbs (ocrelizumab/rituximab/ofatumumab) while breastfeeding, “and this study adds data for patients on natalizumab.”
“It will be important to know how infusion timing after birth impacts transfer of monoclonal antibodies depending on the milk stage as it transitions from colostrum to mature milk in the first month postpartum,” Dr. Graham said.
“While infection rates of infants are reassuring, data on allergies in the exposed infants would be interesting to look at as well,” she added. “While these infusions are not orally bioavailable, we do not know the full extent of impact on the neonatal gut microbiome.”
In addition, Dr. Graham said it would be important to know whether drugs administered monthly, such as natalizumab and ofatumumab, accumulate in the breast milk at higher levels than medications such as ocrelizumab and rituximab, which are administered twice a year.
The German MS and pregnancy registry was partly supported by the Innovation Fund of the Federal Joint Committee, Almirall Hermal GmbH, Biogen GmbH Germany, Hexal AG, Merck Serono GmbH, Novartis Pharma GmbH, Roche Deutschland GmbH, Sanofi Genzyme, and Teva GmbH. Dr. Hellwig and Dr. Graham had no relevant disclosures.
A version of this article appeared on Medscape.com.
Diarrhea in Cancer Therapies — Part 1: Chemotherapeutics
Patients with cancer receiving chemotherapeutics may develop diarrhea, which can be highly distressing. In a recent journal article, oncologist Marcus Hentrich, MD, and gastroenterologist Volker Penndorf, MD, PhD, both of Rotkreuzklinikum in Munich, Germany, explained how affected patients should be treated.
As Dr. Hentrich and Dr. Penndorf explained, classical cytostatic drugs are among the most common triggers of treatment-related diarrhea.
The cytostatic drug irinotecan, which can lead to an acute cholinergic syndrome within 24 hours, is a special case. This syndrome is characterized by watery diarrhea, abdominal cramps, vomiting, sweating, and bradycardia. Additionally, the development of late-onset diarrhea, occurring approximately 3 days after administration, is frequent.
According to the authors, risk factors for toxic enteritis with diarrhea include advanced age, poor performance and nutritional status, simultaneous radiotherapy of the abdomen and pelvis, and preexisting intestinal conditions.
Medication prophylaxis for chemotherapy-induced diarrhea has not been established. An exception is atropine for prophylaxis and treatment of irinotecan-induced cholinergic syndrome.
Indications for diagnostic procedures are outlined in the current German guideline for supportive therapy in patients with cancer.
For diarrhea accompanied by fever, blood cultures are mandatory. A complete blood count provides information on various aspects (leukocytosis as an inflammatory reaction, neutropenia as a marker for infection risk, hemoglobin as a marker for possible hemoconcentration or existing bleeding, and thrombocytopenia as a marker for bleeding tendency). Disproportionate thrombocytopenia may warrant assessment for fragmented red cells (schistocytes) and testing for enterohemorrhagic Escherichia coli.
To assess electrolyte and fluid loss, electrolytes, albumin, and total protein should be measured. The C-reactive protein value may help identify inflammatory conditions, although it may also be elevated because of tumor-related factors. Measuring urea and creatinine allows estimation of whether prerenal impairment of kidney function is already present. Liver function parameters are mandatory for critically ill patients. In patients with hypotension or tachycardia, blood gas analysis and lactate determination are advisable.

Among imaging techniques, ultrasound may be helpful, whereas indications for conventional abdominal x-ray are rare. In the presence of clinical signs of peritoneal irritation (such as guarding and rebound tenderness), a CT scan should be considered to promptly detect further complications (perforation, ileus, enterocolitis, etc.).
Endoscopic examinations are recommended only in cases of persistent, worsening symptoms, according to the guideline. Colonoscopy is contraindicated in suspected neutropenic enterocolitis (NEC) because of the risk for perforation.
According to Dr. Hentrich and Dr. Penndorf, diarrhea therapy is carried out in stages and depends on the severity and response to each therapy. The Common Terminology Criteria for Adverse Events (CTCAE) distinguishes the following severity grades (a minimal grading sketch follows the list):
- Grade 1: < 4 stools per day above baseline
- Grade 2: 4-6 stools per day above baseline
- Grade 3: ≥ 7 stools per day above baseline; fecal incontinence; hospitalization indicated; limited activities of daily living
- Grade 4: Life-threatening consequences, urgent intervention indicated
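For illustration only (a restatement of the list above, not code from the CTCAE or the cited guideline), the frequency-based grades map to a simple function; grade 4 rests on clinical judgment rather than stool counts, so it is omitted here.

```python
# Sketch of CTCAE diarrhea grading by stool frequency; grade 4 (life-
# threatening) is a clinical judgment and cannot be derived from counts.
def ctcae_diarrhea_grade(stools_above_baseline: int) -> int:
    if stools_above_baseline < 4:
        return 1
    if stools_above_baseline <= 6:
        return 2
    return 3  # >= 7/day above baseline; hospitalization may be indicated

print(ctcae_diarrhea_grade(5))  # -> 2
```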
Therapy for Grades 1-2
The standard therapy, once infectious causes have been excluded, is loperamide (initially 4 mg orally, followed by 2 mg every 2-4 hours). The daily dose should not exceed 16 mg.
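A trivial sketch of the arithmetic behind the 16 mg daily cap quoted above, assuming the 4 mg loading dose and 2 mg follow-up doses; this is an illustration of the stated regimen, not dosing software.

```python
# How many further 2 mg loperamide doses fit under the 16 mg/day cap?
def doses_remaining(mg_taken_today: float, daily_cap_mg: float = 16) -> int:
    return max(0, int((daily_cap_mg - mg_taken_today) // 2))

# After the 4 mg loading dose plus three 2 mg doses (10 mg total):
print(doses_remaining(4 + 3 * 2))  # -> 3 more 2 mg doses possible today
```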
For irinotecan-associated diarrhea, adjunctive administration of budesonide (3 mg orally three times per day) with loperamide was shown to be effective in a small, randomized study (off-label). Another randomized study demonstrated the efficacy of combining loperamide with racecadotril (100 mg orally three times per day for 48 hours).
Therapy for Grade 3 Diarrhea
In severe diarrhea persisting despite loperamide therapy for 24-48 hours, octreotide (100-150 μg subcutaneously three times per day) may be administered (maximum 500 μg three times per day). Although octreotide is often used successfully for chemotherapy-induced diarrhea, it is not approved for this indication (off-label use).
According to the authors, other therapy options for loperamide-refractory diarrhea include codeine, tincture of opium, budesonide, and racecadotril. Psyllium husk or diphenoxylate plus atropine may also be attempted. In patients with prolonged neutropenia, overdosing of motility inhibitors should be avoided because of the risk for ileus.
The use of probiotics for chemotherapy-induced diarrhea cannot be generally recommended because of insufficient evidence, and cases of probiotic-associated bacteremia and fungemia have been described.
A particularly serious complication of intensive chemotherapy associated with diarrhea is NEC. It is characterized by fever, abdominal pain, and diarrhea during severe neutropenia (neutrophil count < 500/μL), the authors explained. NEC occurs predominantly, but not exclusively, after intensive chemotherapy for hematologic malignancies, especially acute leukemias.
More common than NEC and often preceding it is the so-called chemotherapy-associated bowel syndrome. It is characterized by fever ≥ 37.8 °C and abdominal pain or absence of stool for at least 72 hours.
Therapy consists of conservative symptomatic measures such as diet, adequate hydration with electrolyte balance, and analgesia. Due to the high risk for bacteremia, antibiotic therapy is indicated after blood cultures are obtained (piperacillin-tazobactam or a carbapenem). According to the authors, NEC improves in most patients with neutrophil regeneration. Granulocyte colony-stimulating factor therapy appears reasonable in this context, although conclusive studies are lacking. Surgical intervention with removal of necrotic bowel segments may be considered in exceptional cases.
This story was translated from Univadis Germany, which is part of the Medscape professional network, using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Delaying Inguinal Hernia Repair Is Beneficial for Preterm Infants
TOPLINE:
Delaying inguinal hernia repair in preterm infants until after discharge from the neonatal intensive care unit (NICU) is associated with fewer serious adverse events than repair before discharge.
METHODOLOGY:
- The study compared the safety of repair before discharge from the NICU with repair after discharge, once infants had passed 55 weeks postmenstrual age (gestational plus chronological age).
- The study randomized 338 infants from 39 US hospitals to early or late repair; of the 320 infants who had the surgery, 86% were male, 30% were Black, and 59% were White.
- The primary outcome was the occurrence of at least one serious adverse event over the 10-month observation period, including apnea requiring respiratory intervention, intubation for more than 2 days, bradycardia requiring pharmacological intervention, or death.
- Secondary outcomes included the total number of days in the hospital, comprising the initial NICU stay after randomization, postoperative hospitalization, and any inpatient days due to hospital readmission over the following 10-month period.
TAKEAWAY:
- Infants who underwent late repair were less likely to have at least one serious adverse event: 18% in the late group vs 28% in the early group.
- Infants in the late repair group had shorter stays in the NICU after randomization, as well as fewer hospital days following surgery.
- Late repair provided the greatest benefit to infants with a gestational age younger than 28 weeks and those who had bronchopulmonary dysplasia.
- Hernias resolved spontaneously in 4% of infants in the early repair group and 11% in the late group, which the authors said supports delaying hernia repair.
IN PRACTICE:
“The decision to treat the inguinal hernia with an early or late repair strategy likely does not influence the overall duration of the neonatal intensive care unit stay but may hasten the discharge by several days if later repair is chosen, which is likely important to parents and neonatologists.”
SOURCE:
The study was published online in JAMA. It was funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development. Martin L. Blakely, MD, MS, from the Department of Surgery at the University of Texas Health Science Center, Houston, Texas, is the corresponding author.
LIMITATIONS:
This study had a modest sample size, an issue compounded by some subjects withdrawing from the trial. The randomization rate was lower than expected. The trial was also discontinued early due to meeting a prespecified stopping rule for effectiveness.
DISCLOSURES:
Study authors report grant support from the US Department of Defense, personal fees, author royalties, and institutional contracts with various companies including Medicem, Fresenius Kabi, Baxter, and Mead Johnson.
A version of this article appeared on Medscape.com.
What is the Best Approach to “Sinus Headaches”?
A 27-year-old woman presents requesting antibiotics for a sinus headache. She reports 3-4 episodes a year of pain in her maxillary area with congestion, without fevers. The current headache began 6 hours ago. In past episodes, the pain has resolved within 24 hours with antibiotics and decongestants. What would be the best treatment for her?
A. Amoxicillin
B. Amoxicillin/clavulanate
C. Amoxicillin + fluticasone nasal spray
D. Sumatriptan
The best treatment would be sumatriptan. This is very likely a variant of migraine headache and migraine-directed therapy is the best option. In regard to sinus headache, the International Headache Society (IHS) classification states that chronic sinusitis is not a cause of headache and facial pain unless it relapses into an acute sinusitis.1
The recurrent nature of the headaches in this patient suggests a primary headache disorder with migraine being the most likely. In a study of 2991 patients with self-diagnosed or physician-diagnosed “sinus headaches,” 88% of the patients met IHS criteria for migraine.2 In this study, most of the patients had symptoms suggesting sinus problems, with the most common symptoms being sinus pressure (84%), sinus pain (82%), and nasal congestion (63%). The likely cause for these symptoms in migraine patients is vasodilation of the nasal mucosa that can be part of the migraine event.
Foroughipour and colleagues found similar results.3 In their study, 58 patients with “sinus headache” were evaluated, with the final diagnosis of migraine in 40 patients (69%), tension-type headache in 16 patients (27%), and chronic sinusitis with recurrent acute episodes in 2 patients (3%). Recurrent antibiotic therapy had been given to 73% of the tension-type headache patients and 66% of the migraine patients.
Obermann et al. looked at how common trigeminal autonomic symptoms were in patients with migraine in a population-based study.4 They found that, of 841 patients with migraine, 226 (26.9%) reported accompanying unilateral trigeminal autonomic symptoms.
Al-Hashel et al. reported on how often patients with frequent migraine who present with sinus symptoms are misdiagnosed, and on how long correct diagnosis takes. Of 130 migraine patients recruited for the study, 81.5% were misdiagnosed with sinusitis; the mean delay in migraine diagnosis was almost 8 years.5
In a study by Dr. Elina Kari and Dr. John M. DelGaudio, patients who had a history of “sinus headaches” were treated as though all these headaches were migraines. Fifty-four patients were enrolled, and 38 patients completed the study. All patients had nasal endoscopy and sinus CT scans that were negative. They were then given migraine-directed treatment to use for their headaches. Of the 38 patients who completed the study, 31 (82%) had a significant reduction in headache pain with triptan use, and 35 (92%) had a significant response to migraine-directed therapy.6 An expert panel consisting of otolaryngologists, neurologists, allergists, and primary care physicians concluded that the majority of sinus headaches can actually be classified as migraines.7
These references aren’t new. This information has been known in the medical literature for more than 2 decades, but I believe that the majority of medical professionals are not aware of it. In my own practice I have found great success treating patients with sinus headache histories with migraine-directed therapy (mostly triptans) when they have return of their headaches.
Pearl: When your patients say they have another sinus headache, think migraine.
Dr. Paauw is professor of medicine in the division of general internal medicine at the University of Washington, Seattle, and he serves as third-year medical student clerkship director at the University of Washington. He is a member of the editorial advisory board of Internal Medicine News. Dr. Paauw has no conflicts to disclose. Contact him at [email protected].
References
1. Jones NS. Expert Rev Neurother. 2009;9:439-44.
2. Schreiber CP et al. Arch Intern Med. 2004;164:1769-72.
3. Foroughipour M et al. Eur Arch Otorhinolaryngol. 2011;268:1593-6.
4. Obermann M et al. Cephalalgia. 2007 Jun;27(6):504-9.
5. Al-Hashel JY et al. J Headache Pain. 2013 Dec 12;14(1):97.
6. Kari E, DelGaudio JM. Laryngoscope. 2008;118:2235-9.
7. Levine HL et al. Otolaryngol Head Neck Surg. 2006 Mar;134(3):516-23.
A 27-year-old woman presents requesting antibiotics for a sinus headache. She reports she has had 3-4 episodes a year with pain in her maxillary area and congestion. She has not had fevers with these episodes. She had the onset of this headache 6 hours ago. She has had resolution of the pain within 24 hours in the past with the use of antibiotics and decongestants. What would be the best treatment for her?
A. Amoxicillin
B. Amoxicillin/clavulanate
C. Amoxicillin + fluticasone nasal spray
D. Sumatriptan
The best treatment would be sumatriptan. This is very likely a variant of migraine headache and migraine-directed therapy is the best option. In regard to sinus headache, the International Headache Society (IHS) classification states that chronic sinusitis is not a cause of headache and facial pain unless it relapses into an acute sinusitis.1
The recurrent nature of the headaches in this patient suggests a primary headache disorder with migraine being the most likely. In a study of 2991 patients with self-diagnosed or physician-diagnosed “sinus headaches,” 88% of the patients met IHS criteria for migraine.2 In this study, most of the patients had symptoms suggesting sinus problems, with the most common symptoms being sinus pressure (84%), sinus pain (82%), and nasal congestion (63%). The likely cause for these symptoms in migraine patients is vasodilation of the nasal mucosa that can be part of the migraine event.
Foroughipour and colleagues found similar results.3 In their study, 58 patients with “sinus headache” were evaluated, with the final diagnosis of migraine in 40 patients (69%), tension-type headache in 16 patients (27%), and chronic sinusitis with recurrent acute episodes in 2 patients (3%). Recurrent antibiotic therapy had been given to 73% of the tension-type headache patients and 66% of the migraine patients.
Obermann et al. looked at how common trigeminal autonomic symptoms were in patients with migraine in a population-based study.4 They found of 841 patients who had migraine, 226 reported accompanying unilateral trigeminal autonomic symptoms (26.9%).
Al-Hashel et al. reported on how patients with frequent migraine are misdiagnosed and how long it takes when they present with sinus symptoms. A total of 130 migraine patients were recruited for the study; of these, 81.5% were misdiagnosed with sinusitis. The mean time delay of migraine diagnosis was almost 8 years.5
In a study by Dr. Elina Kari and Dr. John M. DelGaudio, patients who had a history of “sinus headaches” were treated as though all these headaches were migraines. Fifty-four patients were enrolled, and 38 patients completed the study. All patients had nasal endoscopy and sinus CT scans that were negative. They were then given migraine-directed treatment to use for their headaches. Of the 38 patient who completed the study, 31 patients (82%) had a significant reduction in headache pain with triptan use, and 35 patients (92%) had a significant response to migraine-directed therapy.6 An expert panel consisting of otolaryngologists, neurologists, allergists, and primary care physicians concluded that the majority of sinus headaches can actually be classified as migraines.7
These references aren’t new. This information has been known in the medical literature for more than 2 decades, but I believe that the majority of medical professionals are not aware of it. In my own practice I have found great success treating patients with sinus headache histories with migraine-directed therapy (mostly triptans) when they have return of their headaches.
Pearl: When your patients say they have another sinus headache, think migraine.
Dr. Paauw is professor of medicine in the division of general internal medicine at the University of Washington, Seattle, and he serves as third-year medical student clerkship director at the University of Washington. He is a member of the editorial advisory board of Internal Medicine News. Dr. Paauw has no conflicts to disclose. Contact him at [email protected].
References
1. Jones NS. Expert Rev Neurother. 2009;9:439-44.
2. Schreiber CP et al. Arch Intern Med. 2004;164:1769-72.
3. Foroughipour M et al. Eur Arch Otorhinolaryngol. 2011;268:1593-6.
4. Obermann M et al. Cephalalgia. 2007 Jun;27(6):504-9.
5. Al-Hashel JY et al. J Headache Pain. 2013 Dec 12;14(1):97.
6. Kari E and DelGaudi JM. Laryngoscope. 2008;118:2235-9.
7. Levine HL et al. Otolaryngol Head Neck Surg. 2006 Mar;134(3):516-23.
A 27-year-old woman presents requesting antibiotics for a sinus headache. She reports she has had 3-4 episodes a year with pain in her maxillary area and congestion. She has not had fevers with these episodes. She had the onset of this headache 6 hours ago. She has had resolution of the pain within 24 hours in the past with the use of antibiotics and decongestants. What would be the best treatment for her?
A. Amoxicillin
B. Amoxicillin/clavulanate
C. Amoxicillin + fluticasone nasal spray
D. Sumatriptan
The best treatment would be sumatriptan. This is very likely a variant of migraine headache and migraine-directed therapy is the best option. In regard to sinus headache, the International Headache Society (IHS) classification states that chronic sinusitis is not a cause of headache and facial pain unless it relapses into an acute sinusitis.1
The recurrent nature of the headaches in this patient suggests a primary headache disorder with migraine being the most likely. In a study of 2991 patients with self-diagnosed or physician-diagnosed “sinus headaches,” 88% of the patients met IHS criteria for migraine.2 In this study, most of the patients had symptoms suggesting sinus problems, with the most common symptoms being sinus pressure (84%), sinus pain (82%), and nasal congestion (63%). The likely cause for these symptoms in migraine patients is vasodilation of the nasal mucosa that can be part of the migraine event.
Foroughipour and colleagues found similar results.3 In their study, 58 patients with “sinus headache” were evaluated, with the final diagnosis of migraine in 40 patients (69%), tension-type headache in 16 patients (27%), and chronic sinusitis with recurrent acute episodes in 2 patients (3%). Recurrent antibiotic therapy had been given to 73% of the tension-type headache patients and 66% of the migraine patients.
Obermann et al. looked at how common trigeminal autonomic symptoms were in patients with migraine in a population-based study.4 They found that, of 841 patients who had migraine, 226 (26.9%) reported accompanying unilateral trigeminal autonomic symptoms.
Al-Hashel et al. reported on how often patients with frequent migraine who present with sinus symptoms are misdiagnosed, and for how long.5 Of 130 migraine patients recruited for the study, 81.5% had been misdiagnosed with sinusitis, and the mean delay before migraine was correctly diagnosed was almost 8 years.
In a study by Dr. Elina Kari and Dr. John M. DelGaudio, patients who had a history of “sinus headaches” were treated as though all these headaches were migraines. Fifty-four patients were enrolled, and 38 patients completed the study. All patients had nasal endoscopy and sinus CT scans that were negative. They were then given migraine-directed treatment to use for their headaches. Of the 38 patients who completed the study, 31 (82%) had a significant reduction in headache pain with triptan use, and 35 (92%) had a significant response to migraine-directed therapy.6 An expert panel consisting of otolaryngologists, neurologists, allergists, and primary care physicians concluded that the majority of sinus headaches can actually be classified as migraines.7
These references aren’t new. This information has been known in the medical literature for more than 2 decades, but I believe that the majority of medical professionals are not aware of it. In my own practice I have found great success treating patients with sinus headache histories with migraine-directed therapy (mostly triptans) when they have return of their headaches.
Pearl: When your patients say they have another sinus headache, think migraine.
Dr. Paauw is professor of medicine in the division of general internal medicine at the University of Washington, Seattle, where he also serves as third-year medical student clerkship director. He is a member of the editorial advisory board of Internal Medicine News. Dr. Paauw has no conflicts to disclose. Contact him at [email protected].
References
1. Jones NS. Expert Rev Neurother. 2009;9:439-44.
2. Schreiber CP et al. Arch Intern Med. 2004;164:1769-72.
3. Foroughipour M et al. Eur Arch Otorhinolaryngol. 2011;268:1593-6.
4. Obermann M et al. Cephalalgia. 2007 Jun;27(6):504-9.
5. Al-Hashel JY et al. J Headache Pain. 2013 Dec 12;14(1):97.
6. Kari E, DelGaudio JM. Laryngoscope. 2008;118:2235-9.
7. Levine HL et al. Otolaryngol Head Neck Surg. 2006 Mar;134(3):516-23.
AI for Email Replies? Not Yet
An article in the March 20 JAMA Network Open looked at the use of AI for responding to patient emails. In short, the researchers found that this reduced physician burden but didn’t save any time. Two immediate thoughts:
1. I’m not sure that’s worth the trouble.
2. Unless the AI is simply responding with something like “message received, thank you,” I don’t think this is a good idea.
Yeah, we’re all stretched for time, I understand that. From the starting gun each morning we’re racing between patients, phone calls, incoming test results, staff questions, drug reps, sample closets, dictations, and a million other things.
But we’re not there yet. Someday, yeah, maybe AI can do this, like 2-1B, the surgical droid that replaced Luke’s hand in “The Empire Strikes Back.” Right now, though, we’re not even close. Just because a log-in screen says “Jumping to Hyperspace” doesn’t mean you’re on the Millennium Falcon.
I generally know my patients, but even if I don’t remember them, I can quickly look up their charts and decide how to answer. AI can look up charts, too, but data is only a part of medicine.
There are a lot of things that don’t make it into a chart: our impressions of people and a knowledge of their personalities and anxieties. We take these into account when responding to their questions. People are different in how things need to be said to them, even if the answer is, overall, the same.
And if an automated reply gets something wrong, “it’s the AI’s fault” isn’t going to stand up in court.
I also have to question the benefit of the findings. If it lessens the “click burden” but still takes the same amount of time, are we really gaining anything?
I’m all for the digital age. In many ways it’s made my practice a lot easier. But I think it has a way to go before I let it start dealing directly with patients.
Dr. Block has a solo neurology practice in Scottsdale, Arizona.
Liquid Biopsy for Colorectal Cancer Appears Promising But Still Lacks Robust Efficacy
Blood-based tests for colorectal cancer (CRC) screening appear promising but still lack robust efficacy, according to two new modeling studies and an expert consensus commentary.
Although some patients find blood-based tests more convenient, their higher numbers of false positives and false negatives compared with established screening tests could lead to more CRC cases and deaths.
“Based on their current characteristics, blood tests should not be recommended to replace established colorectal cancer screening tests, since blood tests are neither as effective nor cost-effective and would worsen outcomes,” David Lieberman, MD, AGAF, chair of the American Gastroenterological Association’s CRC Workshop Panel, and lead author of the expert commentary, said in a statement.
The blood tests detect circulating nucleic acids, such as cell-free DNA, and metabolic products associated with CRC and its precursors. Tests are currently in development by Guardant Health and Freenome.
The two modeling studies, published in Gastroenterology on March 26, analyzed the effectiveness and cost-effectiveness of blood-based CRC screening that meets Centers for Medicare & Medicaid Services (CMS) coverage criteria, as well as the comparative effectiveness and cost-effectiveness of CRC screening with blood-based biomarkers versus fecal tests or colonoscopy.
Also published on March 26 in Clinical Gastroenterology and Hepatology, the expert commentary included key conclusions from the AGA CRC Workshop, which analyzed the two modeling studies.
Comparing CRC Screening Methods
In the first modeling study, an international team of researchers ran three microsimulation models for CRC to estimate the effectiveness and cost-effectiveness of triennial blood-based screening for ages 45-75, compared with no screening, annual fecal immunochemical testing (FIT), triennial stool DNA testing combined with a FIT assay, and colonoscopy screening every 10 years. The researchers used CMS coverage criteria for blood tests, with a sensitivity of at least 74% for detection of CRC and specificity of at least 90%.
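To make those thresholds concrete, here is a minimal back-of-the-envelope sketch of what 74% sensitivity and 90% specificity imply per 1,000 people screened. The performance figures are the CMS minimums described above; the 0.5% prevalence of detectable CRC is a hypothetical round number chosen purely for illustration, not a value taken from either study.

```python
# Sketch: expected hits and misses per screened cohort at the CMS minimums.
# Sensitivity/specificity are from the CMS criteria cited in the article;
# the prevalence is a HYPOTHETICAL illustration value.

def screening_outcomes(n_screened, prevalence, sensitivity, specificity):
    """Expected true/false positives and negatives in a screened cohort."""
    with_crc = n_screened * prevalence
    without_crc = n_screened - with_crc
    true_pos = with_crc * sensitivity            # cancers the test detects
    false_neg = with_crc * (1 - sensitivity)     # cancers the test misses
    false_pos = without_crc * (1 - specificity)  # healthy people flagged
    true_neg = without_crc * specificity
    return true_pos, false_neg, false_pos, true_neg

tp, fn, fp, tn = screening_outcomes(1_000, 0.005, 0.74, 0.90)
print(f"detected: {tp:.1f}, missed: {fn:.1f}, false alarms: {fp:.1f}")
# -> detected: 3.7, missed: 1.3, false alarms: 99.5
```

Even at this modest assumed prevalence, roughly 100 of every 1,000 people screened would be flagged for follow-up colonoscopy without having cancer, which is the false-positive burden the commentary authors highlight.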
Without screening, the models predicted between 77 and 88 CRC cases and between 32 and 36 deaths per 1,000 individuals, at a cost of $5.3 million to $5.8 million. Compared with no screening, blood-based screening was considered cost-effective, at an additional cost of $25,600 to $43,700 per quality-adjusted life-year gained (QALYG).
However, compared with the FIT, stool DNA, and colonoscopy options, blood-based screening was not cost-effective, yielding fewer QALYs at higher cost. FIT was more effective and less costly, delivering 5-24 more QALYG at a cost nearly $3.5 million lower than blood-based screening, even when blood-based uptake was 20 percentage points higher than FIT uptake.
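The cost-per-QALYG figures in both models are incremental cost-effectiveness ratios: extra dollars spent divided by extra quality-adjusted life-years gained, comparing one strategy against another. The sketch below shows the arithmetic; the cohort numbers are invented placeholders, not inputs or outputs of either modeling study.

```python
# Incremental cost-effectiveness ratio (ICER): extra dollars per extra
# QALY gained by a new strategy over a reference strategy. All cohort
# figures below are INVENTED placeholders for illustration only.

def icer(cost_new, cost_ref, qaly_new, qaly_ref):
    """Dollars per QALY gained by the new strategy over the reference."""
    delta_cost = cost_new - cost_ref
    delta_qaly = qaly_new - qaly_ref
    if delta_qaly <= 0:
        return float("inf")  # dominated: no health gain to buy
    return delta_cost / delta_qaly

# Hypothetical per-1,000-person cohort: blood test vs. no screening.
print(f"${icer(6_100_000, 5_500_000, 4_520, 4_500):,.0f} per QALY gained")
# -> $30,000, in the same ballpark as the $25,600-$43,700 range above
```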
In the second modeling study, US researchers compared triennial blood-based screening with established alternatives at the CMS thresholds of 74% sensitivity and 90% specificity.
Overall, a blood-based test at the CMS minimum reduced CRC incidence by 40% and CRC mortality by 52% versus no screening. However, a blood-based test was significantly less effective than triennial stool DNA testing, annual FIT, and colonoscopy every 10 years, which reduced CRC incidence by 68%-79% and CRC mortality by 73%-81%.
Assuming a blood-based test would cost the same as a multi-target stool test, the blood-based test would cost $28,500 per QALYG versus no screening. At the same time, FIT, colonoscopy, and stool DNA testing were less costly and more effective. In general, the blood-based test would match FIT’s clinical outcomes if it achieved 1.4- to 1.8-fold the participation rate for FIT.
Even so, sensitivity for advanced precancerous lesions (APLs) was a key determinant. A paradigm-changing blood-based test would need greater than 90% sensitivity for CRC, 80% sensitivity for APLs, 90% specificity, and a cost of less than $120 to $140, the study authors wrote.
“High APL sensitivity, which can result in CRC prevention, should be a top priority for screening test developers,” the authors wrote. “APL detection should not be penalized by a definition of test specificity that focuses on CRC only.”
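To see why the panel presses for higher sensitivity, the same arithmetic as in the earlier sketch can compare cancers missed at the CMS minimum versus the suggested target; the 0.5% prevalence is again a hypothetical illustration value.

```python
# Cancers missed per 1,000 screened at two sensitivity levels.
# Prevalence is a HYPOTHETICAL illustration value, as before.
n, prevalence = 1_000, 0.005
for label, sensitivity in [("CMS minimum (74%)", 0.74),
                           ("panel target (>90%)", 0.90)]:
    missed = n * prevalence * (1 - sensitivity)
    print(f"{label}: ~{missed:.1f} cancers missed per 1,000 screened")
# -> CMS minimum: ~1.3 missed; panel target: ~0.5 missed
```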
Additional Considerations
The AGA CRC Workshop Panel met in September 2023 to review the two modeling studies and other data on blood-based tests for CRC. Overall, the group concluded that a triennial blood test that meets minimal CMS criteria would likely result in better outcomes than no screening and provide a simple process to encourage more people to participate in screening.
However, patients who may have declined colonoscopy should understand the need for a colonoscopy if blood-based tests show abnormal results, the commentary authors wrote.
In addition, because blood-based tests for CRC appear to be less effective and more costly than current screening options, they shouldn’t be recommended to replace established screening methods. Although these blood-based tests may improve screening rates and outcomes in unscreened people, substituting blood tests for other effective tests would increase costs and worsen patient outcomes.
Beyond that, they wrote, the industry should consider other potential benchmarks for an effective blood test, such as a sensitivity for stage I-III CRC of greater than 90% and sensitivity for advanced adenomas of 40%-50% or higher.
“Unless we have the expectation of high sensitivity and specificity, blood-based colorectal cancer tests could lead to false positive and false negative results, which are both bad for patient outcomes,” John M. Carethers, MD, AGAF, vice chancellor for health sciences at UC San Diego, AGA past president, and a member of the AGA CRC Workshop panel, said in a statement.
Several authors reported consultant roles and funding support from numerous companies, including Guardant Health and Freenome.